Jul 2 09:47:54.598220 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024
Jul 2 09:47:54.598233 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 09:47:54.598246 kernel: BIOS-provided physical RAM map:
Jul 2 09:47:54.598250 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Jul 2 09:47:54.598254 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Jul 2 09:47:54.598258 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jul 2 09:47:54.598263 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jul 2 09:47:54.598267 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jul 2 09:47:54.598271 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b18fff] usable
Jul 2 09:47:54.598275 kernel: BIOS-e820: [mem 0x0000000081b19000-0x0000000081b19fff] ACPI NVS
Jul 2 09:47:54.598280 kernel: BIOS-e820: [mem 0x0000000081b1a000-0x0000000081b1afff] reserved
Jul 2 09:47:54.598284 kernel: BIOS-e820: [mem 0x0000000081b1b000-0x000000008afc4fff] usable
Jul 2 09:47:54.598288 kernel: BIOS-e820: [mem 0x000000008afc5000-0x000000008c0a9fff] reserved
Jul 2 09:47:54.598292 kernel: BIOS-e820: [mem 0x000000008c0aa000-0x000000008c232fff] usable
Jul 2 09:47:54.598297 kernel: BIOS-e820: [mem 0x000000008c233000-0x000000008c664fff] ACPI NVS
Jul 2 09:47:54.598302 kernel: BIOS-e820: [mem 0x000000008c665000-0x000000008eefefff] reserved
Jul 2 09:47:54.598306 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Jul 2 09:47:54.598311 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Jul 2 09:47:54.598315 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 2 09:47:54.598320 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jul 2 09:47:54.598324 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jul 2 09:47:54.598328 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jul 2 09:47:54.598333 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jul 2 09:47:54.598337 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Jul 2 09:47:54.598341 kernel: NX (Execute Disable) protection: active
Jul 2 09:47:54.598346 kernel: SMBIOS 3.2.1 present.
Jul 2 09:47:54.598351 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Jul 2 09:47:54.598355 kernel: tsc: Detected 3400.000 MHz processor
Jul 2 09:47:54.598360 kernel: tsc: Detected 3399.906 MHz TSC
Jul 2 09:47:54.598364 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 09:47:54.598369 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 09:47:54.598374 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Jul 2 09:47:54.598378 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 09:47:54.598383 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Jul 2 09:47:54.598387 kernel: Using GB pages for direct mapping
Jul 2 09:47:54.598392 kernel: ACPI: Early table checksum verification disabled
Jul 2 09:47:54.598397 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jul 2 09:47:54.598401 kernel: ACPI: XSDT 0x000000008C5460C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jul 2 09:47:54.598406 kernel: ACPI: FACP 0x000000008C582670 000114 (v06 01072009 AMI 00010013)
Jul 2 09:47:54.598411 kernel: ACPI: DSDT 0x000000008C546268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jul 2 09:47:54.598417 kernel: ACPI: FACS 0x000000008C664F80 000040
Jul 2 09:47:54.598422 kernel: ACPI: APIC 0x000000008C582788 00012C (v04 01072009 AMI 00010013)
Jul 2 09:47:54.598427 kernel: ACPI: FPDT 0x000000008C5828B8 000044 (v01 01072009 AMI 00010013)
Jul 2 09:47:54.598432 kernel: ACPI: FIDT 0x000000008C582900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jul 2 09:47:54.598437 kernel: ACPI: MCFG 0x000000008C5829A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jul 2 09:47:54.598442 kernel: ACPI: SPMI 0x000000008C5829E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jul 2 09:47:54.598447 kernel: ACPI: SSDT 0x000000008C582A28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jul 2 09:47:54.598452 kernel: ACPI: SSDT 0x000000008C584548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jul 2 09:47:54.598457 kernel: ACPI: SSDT 0x000000008C587710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jul 2 09:47:54.598462 kernel: ACPI: HPET 0x000000008C589A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jul 2 09:47:54.598467 kernel: ACPI: SSDT 0x000000008C589A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jul 2 09:47:54.598472 kernel: ACPI: SSDT 0x000000008C58AA28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jul 2 09:47:54.598477 kernel: ACPI: UEFI 0x000000008C58B320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jul 2 09:47:54.598482 kernel: ACPI: LPIT 0x000000008C58B368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jul 2 09:47:54.598487 kernel: ACPI: SSDT 0x000000008C58B400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jul 2 09:47:54.598492 kernel: ACPI: SSDT 0x000000008C58DBE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jul 2 09:47:54.598496 kernel: ACPI: DBGP 0x000000008C58F0C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jul 2 09:47:54.598501 kernel: ACPI: DBG2 0x000000008C58F100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jul 2 09:47:54.598507 kernel: ACPI: SSDT 0x000000008C58F158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jul 2 09:47:54.598512 kernel: ACPI: DMAR 0x000000008C590CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Jul 2 09:47:54.598517 kernel: ACPI: SSDT 0x000000008C590D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jul 2 09:47:54.598522 kernel: ACPI: TPM2 0x000000008C590E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jul 2 09:47:54.598527 kernel: ACPI: SSDT 0x000000008C590EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jul 2 09:47:54.598531 kernel: ACPI: WSMT 0x000000008C591C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Jul 2 09:47:54.598536 kernel: ACPI: EINJ 0x000000008C591C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jul 2 09:47:54.598541 kernel: ACPI: ERST 0x000000008C591D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jul 2 09:47:54.598546 kernel: ACPI: BERT 0x000000008C591FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jul 2 09:47:54.598552 kernel: ACPI: HEST 0x000000008C591FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jul 2 09:47:54.598556 kernel: ACPI: SSDT 0x000000008C592278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jul 2 09:47:54.598561 kernel: ACPI: Reserving FACP table memory at [mem 0x8c582670-0x8c582783]
Jul 2 09:47:54.598566 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c546268-0x8c58266b]
Jul 2 09:47:54.598571 kernel: ACPI: Reserving FACS table memory at [mem 0x8c664f80-0x8c664fbf]
Jul 2 09:47:54.598576 kernel: ACPI: Reserving APIC table memory at [mem 0x8c582788-0x8c5828b3]
Jul 2 09:47:54.598581 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c5828b8-0x8c5828fb]
Jul 2 09:47:54.598586 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c582900-0x8c58299b]
Jul 2 09:47:54.598591 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c5829a0-0x8c5829db]
Jul 2 09:47:54.598596 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c5829e0-0x8c582a20]
Jul 2 09:47:54.598601 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c582a28-0x8c584543]
Jul 2 09:47:54.598606 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c584548-0x8c58770d]
Jul 2 09:47:54.598611 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c587710-0x8c589a3a]
Jul 2 09:47:54.598615 kernel: ACPI: Reserving HPET table memory at [mem 0x8c589a40-0x8c589a77]
Jul 2 09:47:54.598620 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c589a78-0x8c58aa25]
Jul 2 09:47:54.598625 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58b31b]
Jul 2 09:47:54.598630 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c58b320-0x8c58b361]
Jul 2 09:47:54.598636 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c58b368-0x8c58b3fb]
Jul 2 09:47:54.598641 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58b400-0x8c58dbdd]
Jul 2 09:47:54.598645 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58dbe0-0x8c58f0c1]
Jul 2 09:47:54.598650 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c58f0c8-0x8c58f0fb]
Jul 2 09:47:54.598655 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c58f100-0x8c58f153]
Jul 2 09:47:54.598660 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f158-0x8c590cbe]
Jul 2 09:47:54.598665 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c590cc0-0x8c590d2f]
Jul 2 09:47:54.598669 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590d30-0x8c590e73]
Jul 2 09:47:54.598674 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c590e78-0x8c590eab]
Jul 2 09:47:54.598680 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590eb0-0x8c591c3e]
Jul 2 09:47:54.598685 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c591c40-0x8c591c67]
Jul 2 09:47:54.598689 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c591c68-0x8c591d97]
Jul 2 09:47:54.598694 kernel: ACPI: Reserving ERST table memory at [mem 0x8c591d98-0x8c591fc7]
Jul 2 09:47:54.598699 kernel: ACPI: Reserving BERT table memory at [mem 0x8c591fc8-0x8c591ff7]
Jul 2 09:47:54.598704 kernel: ACPI: Reserving HEST table memory at [mem 0x8c591ff8-0x8c592273]
Jul 2 09:47:54.598709 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592278-0x8c5923d9]
Jul 2 09:47:54.598714 kernel: No NUMA configuration found
Jul 2 09:47:54.598718 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Jul 2 09:47:54.598723 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Jul 2 09:47:54.598729 kernel: Zone ranges:
Jul 2 09:47:54.598734 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 09:47:54.598739 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 2 09:47:54.598743 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Jul 2 09:47:54.598748 kernel: Movable zone start for each node
Jul 2 09:47:54.598753 kernel: Early memory node ranges
Jul 2 09:47:54.598758 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Jul 2 09:47:54.598763 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Jul 2 09:47:54.598768 kernel: node 0: [mem 0x0000000040400000-0x0000000081b18fff]
Jul 2 09:47:54.598773 kernel: node 0: [mem 0x0000000081b1b000-0x000000008afc4fff]
Jul 2 09:47:54.598778 kernel: node 0: [mem 0x000000008c0aa000-0x000000008c232fff]
Jul 2 09:47:54.598783 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Jul 2 09:47:54.598788 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Jul 2 09:47:54.598792 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Jul 2 09:47:54.598798 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 09:47:54.598806 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Jul 2 09:47:54.598812 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jul 2 09:47:54.598817 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jul 2 09:47:54.598822 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Jul 2 09:47:54.598828 kernel: On node 0, zone DMA32: 11468 pages in unavailable ranges
Jul 2 09:47:54.598833 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Jul 2 09:47:54.598839 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Jul 2 09:47:54.598844 kernel: ACPI: PM-Timer IO Port: 0x1808
Jul 2 09:47:54.598849 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jul 2 09:47:54.598854 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jul 2 09:47:54.598859 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jul 2 09:47:54.598865 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jul 2 09:47:54.598870 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jul 2 09:47:54.598875 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jul 2 09:47:54.598880 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jul 2 09:47:54.598886 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jul 2 09:47:54.598891 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jul 2 09:47:54.598896 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jul 2 09:47:54.598901 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jul 2 09:47:54.598906 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jul 2 09:47:54.598912 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jul 2 09:47:54.598917 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jul 2 09:47:54.598923 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jul 2 09:47:54.598928 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jul 2 09:47:54.598933 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jul 2 09:47:54.598938 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 09:47:54.598943 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 09:47:54.598948 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 09:47:54.598954 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 2 09:47:54.598960 kernel: TSC deadline timer available
Jul 2 09:47:54.598965 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jul 2 09:47:54.598970 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Jul 2 09:47:54.598975 kernel: Booting paravirtualized kernel on bare hardware
Jul 2 09:47:54.598981 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 09:47:54.598986 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Jul 2 09:47:54.598991 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Jul 2 09:47:54.598996 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Jul 2 09:47:54.599001 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jul 2 09:47:54.599007 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232407
Jul 2 09:47:54.599012 kernel: Policy zone: Normal
Jul 2 09:47:54.599018 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 09:47:54.599024 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 09:47:54.599029 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jul 2 09:47:54.599034 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jul 2 09:47:54.599039 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 09:47:54.599045 kernel: Memory: 32722572K/33452948K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 730116K reserved, 0K cma-reserved)
Jul 2 09:47:54.599051 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jul 2 09:47:54.599056 kernel: ftrace: allocating 34514 entries in 135 pages
Jul 2 09:47:54.599061 kernel: ftrace: allocated 135 pages with 4 groups
Jul 2 09:47:54.599066 kernel: rcu: Hierarchical RCU implementation.
Jul 2 09:47:54.599072 kernel: rcu: RCU event tracing is enabled.
Jul 2 09:47:54.599077 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jul 2 09:47:54.599082 kernel: Rude variant of Tasks RCU enabled.
Jul 2 09:47:54.599088 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 09:47:54.599093 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 09:47:54.599099 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jul 2 09:47:54.599104 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Jul 2 09:47:54.599109 kernel: random: crng init done
Jul 2 09:47:54.599114 kernel: Console: colour dummy device 80x25
Jul 2 09:47:54.599120 kernel: printk: console [tty0] enabled
Jul 2 09:47:54.599125 kernel: printk: console [ttyS1] enabled
Jul 2 09:47:54.599130 kernel: ACPI: Core revision 20210730
Jul 2 09:47:54.599135 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Jul 2 09:47:54.599141 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 09:47:54.599147 kernel: DMAR: Host address width 39
Jul 2 09:47:54.599152 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Jul 2 09:47:54.599157 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Jul 2 09:47:54.599162 kernel: DMAR: RMRR base: 0x0000008cf10000 end: 0x0000008d159fff
Jul 2 09:47:54.599167 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Jul 2 09:47:54.599173 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Jul 2 09:47:54.599178 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Jul 2 09:47:54.599183 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Jul 2 09:47:54.599188 kernel: x2apic enabled
Jul 2 09:47:54.599194 kernel: Switched APIC routing to cluster x2apic.
Jul 2 09:47:54.599199 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Jul 2 09:47:54.599205 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Jul 2 09:47:54.599210 kernel: CPU0: Thermal monitoring enabled (TM1)
Jul 2 09:47:54.599215 kernel: process: using mwait in idle threads
Jul 2 09:47:54.599220 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 2 09:47:54.599225 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jul 2 09:47:54.599230 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 09:47:54.599243 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Jul 2 09:47:54.599249 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jul 2 09:47:54.599255 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jul 2 09:47:54.599260 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jul 2 09:47:54.599265 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 09:47:54.599270 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jul 2 09:47:54.599276 kernel: RETBleed: Mitigation: Enhanced IBRS
Jul 2 09:47:54.599281 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 09:47:54.599286 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jul 2 09:47:54.599291 kernel: TAA: Mitigation: TSX disabled
Jul 2 09:47:54.599296 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jul 2 09:47:54.599301 kernel: SRBDS: Mitigation: Microcode
Jul 2 09:47:54.599307 kernel: GDS: Vulnerable: No microcode
Jul 2 09:47:54.599312 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 09:47:54.599318 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 09:47:54.599323 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 09:47:54.599328 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jul 2 09:47:54.599333 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jul 2 09:47:54.599338 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 09:47:54.599343 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jul 2 09:47:54.599348 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jul 2 09:47:54.599353 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Jul 2 09:47:54.599359 kernel: Freeing SMP alternatives memory: 32K
Jul 2 09:47:54.599364 kernel: pid_max: default: 32768 minimum: 301
Jul 2 09:47:54.599370 kernel: LSM: Security Framework initializing
Jul 2 09:47:54.599375 kernel: SELinux: Initializing.
Jul 2 09:47:54.599380 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 09:47:54.599385 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 09:47:54.599390 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Jul 2 09:47:54.599395 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jul 2 09:47:54.599401 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Jul 2 09:47:54.599406 kernel: ... version: 4
Jul 2 09:47:54.599411 kernel: ... bit width: 48
Jul 2 09:47:54.599416 kernel: ... generic registers: 4
Jul 2 09:47:54.599422 kernel: ... value mask: 0000ffffffffffff
Jul 2 09:47:54.599427 kernel: ... max period: 00007fffffffffff
Jul 2 09:47:54.599433 kernel: ... fixed-purpose events: 3
Jul 2 09:47:54.599438 kernel: ... event mask: 000000070000000f
Jul 2 09:47:54.599443 kernel: signal: max sigframe size: 2032
Jul 2 09:47:54.599448 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 09:47:54.599453 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Jul 2 09:47:54.599459 kernel: smp: Bringing up secondary CPUs ...
Jul 2 09:47:54.599464 kernel: x86: Booting SMP configuration:
Jul 2 09:47:54.599470 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Jul 2 09:47:54.599475 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 2 09:47:54.599481 kernel: #9 #10 #11 #12 #13 #14 #15
Jul 2 09:47:54.599486 kernel: smp: Brought up 1 node, 16 CPUs
Jul 2 09:47:54.599491 kernel: smpboot: Max logical packages: 1
Jul 2 09:47:54.599496 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Jul 2 09:47:54.599501 kernel: devtmpfs: initialized
Jul 2 09:47:54.599506 kernel: x86/mm: Memory block size: 128MB
Jul 2 09:47:54.599512 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b19000-0x81b19fff] (4096 bytes)
Jul 2 09:47:54.599517 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c233000-0x8c664fff] (4399104 bytes)
Jul 2 09:47:54.599523 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 09:47:54.599528 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jul 2 09:47:54.599533 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 09:47:54.599539 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 09:47:54.599544 kernel: audit: initializing netlink subsys (disabled)
Jul 2 09:47:54.599549 kernel: audit: type=2000 audit(1719913669.041:1): state=initialized audit_enabled=0 res=1
Jul 2 09:47:54.599554 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 09:47:54.599559 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 09:47:54.599565 kernel: cpuidle: using governor menu
Jul 2 09:47:54.599570 kernel: ACPI: bus type PCI registered
Jul 2 09:47:54.599575 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 09:47:54.599581 kernel: dca service started, version 1.12.1
Jul 2 09:47:54.599586 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jul 2 09:47:54.599591 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Jul 2 09:47:54.599596 kernel: PCI: Using configuration type 1 for base access
Jul 2 09:47:54.599601 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Jul 2 09:47:54.599606 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 09:47:54.599612 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 09:47:54.599617 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 09:47:54.599623 kernel: ACPI: Added _OSI(Module Device)
Jul 2 09:47:54.599628 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 09:47:54.599633 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 09:47:54.599638 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 09:47:54.599643 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 09:47:54.599648 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 09:47:54.599654 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 09:47:54.599660 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Jul 2 09:47:54.599665 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 09:47:54.599670 kernel: ACPI: SSDT 0xFFFF940240220F00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Jul 2 09:47:54.599675 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Jul 2 09:47:54.599681 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 09:47:54.599686 kernel: ACPI: SSDT 0xFFFF940241AE8800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Jul 2 09:47:54.599691 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 09:47:54.599696 kernel: ACPI: SSDT 0xFFFF940241A61800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Jul 2 09:47:54.599701 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 09:47:54.599706 kernel: ACPI: SSDT 0xFFFF940241B57800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Jul 2 09:47:54.599712 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 09:47:54.599717 kernel: ACPI: SSDT 0xFFFF940240150000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Jul 2 09:47:54.599722 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 09:47:54.599728 kernel: ACPI: SSDT 0xFFFF940241AEA000 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Jul 2 09:47:54.599733 kernel: ACPI: Interpreter enabled
Jul 2 09:47:54.599738 kernel: ACPI: PM: (supports S0 S5)
Jul 2 09:47:54.599743 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 09:47:54.599748 kernel: HEST: Enabling Firmware First mode for corrected errors.
Jul 2 09:47:54.599753 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Jul 2 09:47:54.599759 kernel: HEST: Table parsing has been initialized.
Jul 2 09:47:54.599764 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Jul 2 09:47:54.599770 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 09:47:54.599775 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Jul 2 09:47:54.599780 kernel: ACPI: PM: Power Resource [USBC]
Jul 2 09:47:54.599785 kernel: ACPI: PM: Power Resource [V0PR]
Jul 2 09:47:54.599790 kernel: ACPI: PM: Power Resource [V1PR]
Jul 2 09:47:54.599796 kernel: ACPI: PM: Power Resource [V2PR]
Jul 2 09:47:54.599801 kernel: ACPI: PM: Power Resource [WRST]
Jul 2 09:47:54.599807 kernel: ACPI: PM: Power Resource [FN00]
Jul 2 09:47:54.599812 kernel: ACPI: PM: Power Resource [FN01]
Jul 2 09:47:54.599817 kernel: ACPI: PM: Power Resource [FN02]
Jul 2 09:47:54.599822 kernel: ACPI: PM: Power Resource [FN03]
Jul 2 09:47:54.599827 kernel: ACPI: PM: Power Resource [FN04]
Jul 2 09:47:54.599832 kernel: ACPI: PM: Power Resource [PIN]
Jul 2 09:47:54.599837 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Jul 2 09:47:54.599905 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 09:47:54.599956 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Jul 2 09:47:54.599999 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Jul 2 09:47:54.600007 kernel: PCI host bridge to bus 0000:00
Jul 2 09:47:54.600051 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 09:47:54.600091 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 09:47:54.600131 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 09:47:54.600170 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Jul 2 09:47:54.600211 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Jul 2 09:47:54.600254 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Jul 2 09:47:54.600309 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Jul 2 09:47:54.600361 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Jul 2 09:47:54.600408 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Jul 2 09:47:54.600457 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Jul 2 09:47:54.600505 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x95520000-0x95520fff 64bit]
Jul 2 09:47:54.600553 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Jul 2 09:47:54.600597 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Jul 2 09:47:54.600649 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Jul 2 09:47:54.600693 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Jul 2 09:47:54.600738 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Jul 2 09:47:54.600787 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Jul 2 09:47:54.600832 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Jul 2 09:47:54.600875 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551e000-0x9551efff 64bit]
Jul 2 09:47:54.600923 kernel: pci 0000:00:14.5: [8086:a375] type 00 class 0x080501
Jul 2 09:47:54.600968 kernel: pci 0000:00:14.5: reg 0x10: [mem 0x9551d000-0x9551dfff 64bit]
Jul 2 09:47:54.601018 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Jul 2 09:47:54.601066 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jul 2 09:47:54.601113 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Jul 2 09:47:54.601157 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jul 2 09:47:54.601204 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Jul 2 09:47:54.601255 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Jul 2 09:47:54.601301 kernel: pci 0000:00:16.0: PME# supported from D3hot
Jul 2 09:47:54.601348 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Jul 2 09:47:54.601395 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Jul 2 09:47:54.601439 kernel: pci 0000:00:16.1: PME# supported from D3hot
Jul 2 09:47:54.601486 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Jul 2 09:47:54.601531 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Jul 2 09:47:54.601575 kernel: pci 0000:00:16.4: PME# supported from D3hot
Jul 2 09:47:54.601623 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Jul 2 09:47:54.601675 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Jul 2 09:47:54.601722 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Jul 2 09:47:54.601766 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Jul 2 09:47:54.601810 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Jul 2 09:47:54.601853 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Jul 2 09:47:54.601898 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Jul 2 09:47:54.601941 kernel: pci 0000:00:17.0: PME# supported from D3hot
Jul 2 09:47:54.601992 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Jul 2 09:47:54.602040 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Jul 2 09:47:54.602089 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Jul 2 09:47:54.602137 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Jul 2 09:47:54.602185 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Jul 2 09:47:54.602232 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Jul 2 09:47:54.602290 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Jul 2 09:47:54.602337 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Jul 2 09:47:54.602388 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Jul 2 09:47:54.602433 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Jul 2 09:47:54.602484 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Jul 2 09:47:54.602529 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jul 2 09:47:54.602578 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Jul 2 09:47:54.602629 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Jul 2 09:47:54.602674 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Jul 2 09:47:54.602718 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Jul 2 09:47:54.602767 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Jul 2 09:47:54.602813 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Jul 2 09:47:54.602864 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Jul 2 09:47:54.602912 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Jul 2 09:47:54.602958 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Jul 2 09:47:54.603005 kernel: pci 0000:01:00.0: PME# supported from D3cold
Jul 2 09:47:54.603050 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jul 2 09:47:54.603099 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jul 2 09:47:54.603150 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Jul 2 09:47:54.603197 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Jul 2 09:47:54.603247 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Jul 2 09:47:54.603295 kernel: pci 0000:01:00.1: PME# supported from D3cold
Jul 2 09:47:54.603341 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jul 2 09:47:54.603387 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jul 2 09:47:54.603435 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jul 2 09:47:54.603481 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Jul 2 09:47:54.603526 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Jul 2 09:47:54.603571 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Jul 2 09:47:54.603623 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect
Jul 2 09:47:54.603670 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
Jul 2 09:47:54.603716 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Jul 2 09:47:54.603762 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
Jul 2 09:47:54.603811 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Jul 2 09:47:54.603856 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Jul 2 09:47:54.603901 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Jul 2 09:47:54.603945 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Jul 2 09:47:54.603989 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Jul 2 09:47:54.604039 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Jul 2 09:47:54.604085 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Jul 2 09:47:54.604134 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Jul 2 09:47:54.604182 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
Jul 2 09:47:54.604286 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Jul 2 09:47:54.604332 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Jul 2 09:47:54.604378 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Jul 2 09:47:54.604423 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Jul 2 09:47:54.604468 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Jul 2 09:47:54.604513 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Jul 2 09:47:54.604568 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
Jul 2 09:47:54.604616 kernel: pci 0000:06:00.0: enabling Extended Tags
Jul 2 09:47:54.604662 kernel: pci 0000:06:00.0: supports D1 D2
Jul 2 09:47:54.604709 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 2 09:47:54.604754 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Jul 2 09:47:54.604799 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Jul 2 09:47:54.604844 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Jul 2 09:47:54.604896 kernel: pci_bus 0000:07: extended config space not accessible
Jul 2 09:47:54.604948 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000
Jul 2 09:47:54.604999 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
Jul 2 09:47:54.605047 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
Jul 2 09:47:54.605095 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f]
Jul 2 09:47:54.605143 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 09:47:54.605191 kernel: pci 0000:07:00.0: supports D1 D2
Jul 2 09:47:54.605244 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 2 09:47:54.605293 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Jul 2 09:47:54.605339 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Jul 2 09:47:54.605386 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Jul 2 09:47:54.605395 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Jul 2 09:47:54.605400 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Jul 2 09:47:54.605406 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Jul 2 09:47:54.605411 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Jul 2 09:47:54.605419 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Jul 2 09:47:54.605424 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Jul 2 09:47:54.605430 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Jul 2 09:47:54.605435 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Jul 2 09:47:54.605441 kernel: iommu: Default domain type: Translated
Jul 2 09:47:54.605447 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 09:47:54.605493 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device
Jul 2 09:47:54.605543 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 09:47:54.605590 kernel: pci 0000:07:00.0: vgaarb: bridge control possible
Jul 2 09:47:54.605599 kernel: vgaarb: loaded
Jul 2 09:47:54.605605 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 09:47:54.605611 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 09:47:54.605617 kernel: PTP clock support registered
Jul 2 09:47:54.605623 kernel: PCI: Using ACPI for IRQ routing
Jul 2 09:47:54.605628 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 09:47:54.605634 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Jul 2 09:47:54.605639 kernel: e820: reserve RAM buffer [mem 0x81b19000-0x83ffffff]
Jul 2 09:47:54.605645 kernel: e820: reserve RAM buffer [mem 0x8afc5000-0x8bffffff]
Jul 2 09:47:54.605651 kernel: e820: reserve RAM buffer [mem 0x8c233000-0x8fffffff]
Jul 2 09:47:54.605656 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
Jul 2 09:47:54.605661 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
Jul 2 09:47:54.605667 kernel: clocksource: Switched to clocksource tsc-early
Jul 2 09:47:54.605672 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 09:47:54.605678 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 09:47:54.605684 kernel: pnp: PnP ACPI init
Jul 2 09:47:54.605730 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Jul 2 09:47:54.605776 kernel: pnp 00:02: [dma 0 disabled]
Jul 2 09:47:54.605823 kernel: pnp 00:03: [dma 0 disabled]
Jul 2 09:47:54.605868 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Jul 2 09:47:54.605908 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Jul 2 09:47:54.605952 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
Jul 2 09:47:54.605995 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Jul 2 09:47:54.606038 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Jul 2 09:47:54.606078 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Jul 2 09:47:54.606117 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Jul 2 09:47:54.606158 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Jul 2 09:47:54.606197 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Jul 2 09:47:54.606243 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Jul 2 09:47:54.606286 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Jul 2 09:47:54.606332 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
Jul 2 09:47:54.606373 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Jul 2 09:47:54.606413 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Jul 2 09:47:54.606452 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Jul 2 09:47:54.606492 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Jul 2 09:47:54.606531 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Jul 2 09:47:54.606571 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Jul 2 09:47:54.606616 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
Jul 2 09:47:54.606625 kernel: pnp: PnP ACPI: found 10 devices
Jul 2 09:47:54.606631 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 09:47:54.606636 kernel: NET: Registered PF_INET protocol family
Jul 2 09:47:54.606642 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 09:47:54.606648 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Jul 2 09:47:54.606654 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 09:47:54.606660 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 09:47:54.606667 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jul 2 09:47:54.606672 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Jul 2 09:47:54.606678 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 2 09:47:54.606684 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 2 09:47:54.606689 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 09:47:54.606695 kernel: NET: Registered PF_XDP protocol family
Jul 2 09:47:54.606739 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
Jul 2 09:47:54.606785 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
Jul 2 09:47:54.606830 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
Jul 2 09:47:54.606879 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Jul 2 09:47:54.606925 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Jul 2 09:47:54.606973 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Jul 2 09:47:54.607019 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Jul 2 09:47:54.607065 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jul 2 09:47:54.607110 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Jul 2 09:47:54.607157 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Jul 2 09:47:54.607202 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Jul 2 09:47:54.607250 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Jul 2 09:47:54.607296 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Jul 2 09:47:54.607341 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Jul 2 09:47:54.607385 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Jul 2 09:47:54.607433 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Jul 2 09:47:54.607477 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Jul 2 09:47:54.607522 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Jul 2 09:47:54.607568 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Jul 2 09:47:54.607614 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Jul 2 09:47:54.607660 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Jul 2 09:47:54.607704 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Jul 2 09:47:54.607748 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Jul 2 09:47:54.607792 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Jul 2 09:47:54.607834 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Jul 2 09:47:54.607875 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 09:47:54.607914 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 09:47:54.607953 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 09:47:54.607991 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
Jul 2 09:47:54.608031 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Jul 2 09:47:54.608077 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff]
Jul 2 09:47:54.608120 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Jul 2 09:47:54.608166 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff]
Jul 2 09:47:54.608207 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff]
Jul 2 09:47:54.608257 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jul 2 09:47:54.608299 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff]
Jul 2 09:47:54.608344 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff]
Jul 2 09:47:54.608385 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff]
Jul 2 09:47:54.608432 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Jul 2 09:47:54.608475 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
Jul 2 09:47:54.608482 kernel: PCI: CLS 64 bytes, default 64
Jul 2 09:47:54.608488 kernel: DMAR: No ATSR found
Jul 2 09:47:54.608494 kernel: DMAR: No SATC found
Jul 2 09:47:54.608500 kernel: DMAR: dmar0: Using Queued invalidation
Jul 2 09:47:54.608544 kernel: pci 0000:00:00.0: Adding to iommu group 0
Jul 2 09:47:54.608589 kernel: pci 0000:00:01.0: Adding to iommu group 1
Jul 2 09:47:54.608636 kernel: pci 0000:00:08.0: Adding to iommu group 2
Jul 2 09:47:54.608680 kernel: pci 0000:00:12.0: Adding to iommu group 3
Jul 2 09:47:54.608726 kernel: pci 0000:00:14.0: Adding to iommu group 4
Jul 2 09:47:54.608770 kernel: pci 0000:00:14.2: Adding to iommu group 4
Jul 2 09:47:54.608813 kernel: pci 0000:00:14.5: Adding to iommu group 4
Jul 2 09:47:54.608858 kernel: pci 0000:00:15.0: Adding to iommu group 5
Jul 2 09:47:54.608901 kernel: pci 0000:00:15.1: Adding to iommu group 5
Jul 2 09:47:54.608946 kernel: pci 0000:00:16.0: Adding to iommu group 6
Jul 2 09:47:54.608991 kernel: pci 0000:00:16.1: Adding to iommu group 6
Jul 2 09:47:54.609036 kernel: pci 0000:00:16.4: Adding to iommu group 6
Jul 2 09:47:54.609079 kernel: pci 0000:00:17.0: Adding to iommu group 7
Jul 2 09:47:54.609124 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Jul 2 09:47:54.609168 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Jul 2 09:47:54.609213 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Jul 2 09:47:54.609262 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Jul 2 09:47:54.609307 kernel: pci 0000:00:1c.3: Adding to iommu group 12
Jul 2 09:47:54.609351 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Jul 2 09:47:54.609398 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Jul 2 09:47:54.609442 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Jul 2 09:47:54.609486 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Jul 2 09:47:54.609532 kernel: pci 0000:01:00.0: Adding to iommu group 1
Jul 2 09:47:54.609579 kernel: pci 0000:01:00.1: Adding to iommu group 1
Jul 2 09:47:54.609624 kernel: pci 0000:03:00.0: Adding to iommu group 15
Jul 2 09:47:54.609671 kernel: pci 0000:04:00.0: Adding to iommu group 16
Jul 2 09:47:54.609716 kernel: pci 0000:06:00.0: Adding to iommu group 17
Jul 2 09:47:54.609767 kernel: pci 0000:07:00.0: Adding to iommu group 17
Jul 2 09:47:54.609775 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Jul 2 09:47:54.609781 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 2 09:47:54.609786 kernel: software IO TLB: mapped [mem 0x0000000086fc5000-0x000000008afc5000] (64MB)
Jul 2 09:47:54.609792 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Jul 2 09:47:54.609798 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Jul 2 09:47:54.609803 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Jul 2 09:47:54.609809 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Jul 2 09:47:54.609857 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Jul 2 09:47:54.609866 kernel: Initialise system trusted keyrings
Jul 2 09:47:54.609871 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Jul 2 09:47:54.609877 kernel: Key type asymmetric registered
Jul 2 09:47:54.609882 kernel: Asymmetric key parser 'x509' registered
Jul 2 09:47:54.609888 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 09:47:54.609893 kernel: io scheduler mq-deadline registered
Jul 2 09:47:54.609899 kernel: io scheduler kyber registered
Jul 2 09:47:54.609905 kernel: io scheduler bfq registered
Jul 2 09:47:54.609951 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Jul 2 09:47:54.609996 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
Jul 2 09:47:54.610042 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
Jul 2 09:47:54.610087 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
Jul 2 09:47:54.610131 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
Jul 2 09:47:54.610177 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
Jul 2 09:47:54.610229 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Jul 2 09:47:54.610242 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Jul 2 09:47:54.610248 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Jul 2 09:47:54.610254 kernel: pstore: Registered erst as persistent store backend
Jul 2 09:47:54.610260 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 09:47:54.610265 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 09:47:54.610271 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 09:47:54.610277 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 2 09:47:54.610282 kernel: hpet_acpi_add: no address or irqs in _CRS
Jul 2 09:47:54.610329 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Jul 2 09:47:54.610339 kernel: i8042: PNP: No PS/2 controller found.
Jul 2 09:47:54.610379 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Jul 2 09:47:54.610422 kernel: rtc_cmos rtc_cmos: registered as rtc0
Jul 2 09:47:54.610462 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-07-02T09:47:53 UTC (1719913673)
Jul 2 09:47:54.610502 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Jul 2 09:47:54.610510 kernel: fail to initialize ptp_kvm
Jul 2 09:47:54.610516 kernel: intel_pstate: Intel P-state driver initializing
Jul 2 09:47:54.610523 kernel: intel_pstate: Disabling energy efficiency optimization
Jul 2 09:47:54.610528 kernel: intel_pstate: HWP enabled
Jul 2 09:47:54.610534 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Jul 2 09:47:54.610539 kernel: vesafb: scrolling: redraw
Jul 2 09:47:54.610545 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Jul 2 09:47:54.610551 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000a7e86ef7, using 768k, total 768k
Jul 2 09:47:54.610556 kernel: Console: switching to colour frame buffer device 128x48
Jul 2 09:47:54.610562 kernel: fb0: VESA VGA frame buffer device
Jul 2 09:47:54.610567 kernel: NET: Registered PF_INET6 protocol family
Jul 2 09:47:54.610574 kernel: Segment Routing with IPv6
Jul 2 09:47:54.610579 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 09:47:54.610585 kernel: NET: Registered PF_PACKET protocol family
Jul 2 09:47:54.610590 kernel: Key type dns_resolver registered
Jul 2 09:47:54.610596 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4
Jul 2 09:47:54.610601 kernel: microcode: Microcode Update Driver: v2.2.
Jul 2 09:47:54.610607 kernel: IPI shorthand broadcast: enabled
Jul 2 09:47:54.610612 kernel: sched_clock: Marking stable (1689806941, 1334901226)->(4478159283, -1453451116)
Jul 2 09:47:54.610618 kernel: registered taskstats version 1
Jul 2 09:47:54.610624 kernel: Loading compiled-in X.509 certificates
Jul 2 09:47:54.610630 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42'
Jul 2 09:47:54.610635 kernel: Key type .fscrypt registered
Jul 2 09:47:54.610640 kernel: Key type fscrypt-provisioning registered
Jul 2 09:47:54.610646 kernel: pstore: Using crash dump compression: deflate
Jul 2 09:47:54.610651 kernel: ima: Allocated hash algorithm: sha1
Jul 2 09:47:54.610657 kernel: ima: No architecture policies found
Jul 2 09:47:54.610662 kernel: clk: Disabling unused clocks
Jul 2 09:47:54.610668 kernel: Freeing unused kernel image (initmem) memory: 47444K
Jul 2 09:47:54.610674 kernel: Write protecting the kernel read-only data: 28672k
Jul 2 09:47:54.610680 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jul 2 09:47:54.610685 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K
Jul 2 09:47:54.610691 kernel: Run /init as init process
Jul 2 09:47:54.610696 kernel: with arguments:
Jul 2 09:47:54.610702 kernel: /init
Jul 2 09:47:54.610707 kernel: with environment:
Jul 2 09:47:54.610713 kernel: HOME=/
Jul 2 09:47:54.610718 kernel: TERM=linux
Jul 2 09:47:54.610724 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 09:47:54.610731 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 09:47:54.610738 systemd[1]: Detected architecture x86-64.
Jul 2 09:47:54.610744 systemd[1]: Running in initrd. Jul 2 09:47:54.610749 systemd[1]: No hostname configured, using default hostname. Jul 2 09:47:54.610755 systemd[1]: Hostname set to . Jul 2 09:47:54.610760 systemd[1]: Initializing machine ID from random generator. Jul 2 09:47:54.610767 systemd[1]: Queued start job for default target initrd.target. Jul 2 09:47:54.610773 systemd[1]: Started systemd-ask-password-console.path. Jul 2 09:47:54.610778 systemd[1]: Reached target cryptsetup.target. Jul 2 09:47:54.610784 systemd[1]: Reached target ignition-diskful-subsequent.target. Jul 2 09:47:54.610790 systemd[1]: Reached target paths.target. Jul 2 09:47:54.610795 systemd[1]: Reached target slices.target. Jul 2 09:47:54.610801 systemd[1]: Reached target swap.target. Jul 2 09:47:54.610806 systemd[1]: Reached target timers.target. Jul 2 09:47:54.610813 systemd[1]: Listening on iscsid.socket. Jul 2 09:47:54.610819 systemd[1]: Listening on iscsiuio.socket. Jul 2 09:47:54.610825 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 09:47:54.610830 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 09:47:54.610836 systemd[1]: Listening on systemd-journald.socket. Jul 2 09:47:54.610842 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 09:47:54.610847 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Jul 2 09:47:54.610853 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Jul 2 09:47:54.610860 kernel: clocksource: Switched to clocksource tsc Jul 2 09:47:54.610865 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 09:47:54.610871 systemd[1]: Reached target sockets.target. Jul 2 09:47:54.610877 systemd[1]: Starting iscsiuio.service... Jul 2 09:47:54.610883 systemd[1]: Starting kmod-static-nodes.service... Jul 2 09:47:54.610888 kernel: SCSI subsystem initialized Jul 2 09:47:54.610894 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 09:47:54.610900 kernel: Loading iSCSI transport class v2.0-870. Jul 2 09:47:54.610905 systemd[1]: Starting systemd-journald.service... Jul 2 09:47:54.610911 systemd[1]: Starting systemd-modules-load.service... Jul 2 09:47:54.610920 systemd-journald[267]: Journal started Jul 2 09:47:54.610947 systemd-journald[267]: Runtime Journal (/run/log/journal/fe009cdc01a942ba8d5280afeed41218) is 8.0M, max 640.0M, 632.0M free. Jul 2 09:47:54.612710 systemd-modules-load[268]: Inserted module 'overlay' Jul 2 09:47:54.636618 systemd[1]: Starting systemd-vconsole-setup.service... Jul 2 09:47:54.671274 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 09:47:54.671290 systemd[1]: Started iscsiuio.service. Jul 2 09:47:54.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:54.696286 kernel: Bridge firewalling registered Jul 2 09:47:54.696301 kernel: audit: type=1130 audit(1719913674.694:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:54.696310 systemd[1]: Started systemd-journald.service. Jul 2 09:47:54.756393 systemd-modules-load[268]: Inserted module 'br_netfilter' Jul 2 09:47:54.870317 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 09:47:54.870330 kernel: audit: type=1130 audit(1719913674.774:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:54.870338 kernel: device-mapper: uevent: version 1.0.3 Jul 2 09:47:54.870345 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 09:47:54.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:54.775500 systemd[1]: Finished kmod-static-nodes.service. Jul 2 09:47:54.924999 kernel: audit: type=1130 audit(1719913674.881:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:54.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:54.876188 systemd-modules-load[268]: Inserted module 'dm_multipath' Jul 2 09:47:54.976385 kernel: audit: type=1130 audit(1719913674.932:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:54.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:54.882537 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 09:47:55.030003 kernel: audit: type=1130 audit(1719913674.984:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:54.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:54.933522 systemd[1]: Finished systemd-modules-load.service. Jul 2 09:47:55.084317 kernel: audit: type=1130 audit(1719913675.037:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:55.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:54.985532 systemd[1]: Finished systemd-vconsole-setup.service. Jul 2 09:47:55.038808 systemd[1]: Starting dracut-cmdline-ask.service... Jul 2 09:47:55.084627 systemd[1]: Starting systemd-sysctl.service... Jul 2 09:47:55.084927 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 09:47:55.087774 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 09:47:55.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:55.088242 systemd[1]: Finished systemd-sysctl.service. 
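Editor's note: the bridge notice earlier in this boot ("filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this") is followed by systemd-modules-load inserting br_netfilter. On systemd-based hosts one conventional way to make such a module load persistent is a drop-in under /etc/modules-load.d/; the snippet below is a hypothetical illustration of that mechanism, not a file taken from this machine:

    # Hypothetical sketch: persist loading of br_netfilter via systemd-modules-load.
    # The path follows the standard modules-load.d convention; requires root to write.
    from pathlib import Path

    conf = Path("/etc/modules-load.d/br_netfilter.conf")
    conf.write_text("br_netfilter\n")  # one module name per line
    print(f"wrote {conf}")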
Jul 2 09:47:55.199718 kernel: audit: type=1130 audit(1719913675.086:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:55.199733 kernel: audit: type=1130 audit(1719913675.150:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:55.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:55.151634 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 09:47:55.257993 kernel: audit: type=1130 audit(1719913675.207:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:55.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:55.208861 systemd[1]: Starting dracut-cmdline.service... Jul 2 09:47:55.276350 kernel: iscsi: registered transport (tcp) Jul 2 09:47:55.276361 dracut-cmdline[290]: dracut-dracut-053 Jul 2 09:47:55.276361 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Jul 2 09:47:55.276361 dracut-cmdline[290]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 09:47:55.358487 kernel: iscsi: registered transport (qla4xxx) Jul 2 09:47:55.358501 kernel: QLogic iSCSI HBA Driver Jul 2 09:47:55.355135 systemd[1]: Finished dracut-cmdline.service. Jul 2 09:47:55.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:55.367967 systemd[1]: Starting dracut-pre-udev.service... Jul 2 09:47:55.381547 systemd[1]: Starting iscsid.service... Jul 2 09:47:55.395411 systemd[1]: Started iscsid.service. Jul 2 09:47:55.431352 kernel: raid6: avx2x4 gen() 48958 MB/s Jul 2 09:47:55.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:55.431429 iscsid[451]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 09:47:55.431429 iscsid[451]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 09:47:55.431429 iscsid[451]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 09:47:55.431429 iscsid[451]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
Jul 2 09:47:55.431429 iscsid[451]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 09:47:55.431429 iscsid[451]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 09:47:55.431429 iscsid[451]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 09:47:55.597346 kernel: raid6: avx2x4 xor() 21817 MB/s Jul 2 09:47:55.597357 kernel: raid6: avx2x2 gen() 53729 MB/s Jul 2 09:47:55.597365 kernel: raid6: avx2x2 xor() 32236 MB/s Jul 2 09:47:55.597371 kernel: raid6: avx2x1 gen() 45270 MB/s Jul 2 09:47:55.597378 kernel: raid6: avx2x1 xor() 27959 MB/s Jul 2 09:47:55.639310 kernel: raid6: sse2x4 gen() 21399 MB/s Jul 2 09:47:55.674308 kernel: raid6: sse2x4 xor() 11983 MB/s Jul 2 09:47:55.709311 kernel: raid6: sse2x2 gen() 21820 MB/s Jul 2 09:47:55.744267 kernel: raid6: sse2x2 xor() 13395 MB/s Jul 2 09:47:55.777269 kernel: raid6: sse2x1 gen() 18310 MB/s Jul 2 09:47:55.830024 kernel: raid6: sse2x1 xor() 8903 MB/s Jul 2 09:47:55.830040 kernel: raid6: using algorithm avx2x2 gen() 53729 MB/s Jul 2 09:47:55.830047 kernel: raid6: .... xor() 32236 MB/s, rmw enabled Jul 2 09:47:55.848499 kernel: raid6: using avx2x2 recovery algorithm Jul 2 09:47:55.895288 kernel: xor: automatically using best checksumming function avx Jul 2 09:47:55.974267 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 09:47:55.979425 systemd[1]: Finished dracut-pre-udev.service. Jul 2 09:47:55.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:55.987000 audit: BPF prog-id=6 op=LOAD Jul 2 09:47:55.987000 audit: BPF prog-id=7 op=LOAD Jul 2 09:47:55.989230 systemd[1]: Starting systemd-udevd.service... Jul 2 09:47:55.997635 systemd-udevd[468]: Using default interface naming scheme 'v252'. Jul 2 09:47:56.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:56.003402 systemd[1]: Started systemd-udevd.service. Jul 2 09:47:56.045366 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation Jul 2 09:47:56.020870 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 09:47:56.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:56.048290 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 09:47:56.064364 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 09:47:56.116051 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 09:47:56.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:56.116587 systemd[1]: Starting dracut-initqueue.service... Jul 2 09:47:56.145243 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 09:47:56.147244 kernel: ACPI: bus type USB registered Jul 2 09:47:56.147270 kernel: usbcore: registered new interface driver usbfs Jul 2 09:47:56.147284 kernel: usbcore: registered new interface driver hub Jul 2 09:47:56.147297 kernel: usbcore: registered new device driver usb Jul 2 09:47:56.220245 kernel: libata version 3.00 loaded. 
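Editor's note: the iscsid warning above flags a missing /etc/iscsi/initiatorname.iscsi and spells out the expected format (InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier], e.g. iqn.2001-04.com.redhat:fc6). For software iSCSI this is normally a one-line file. The sketch below writes such a file using a made-up domain and identifier, purely to illustrate the format the daemon asks for; it is not a value from this host:

    # Illustrative only: create an InitiatorName in the format iscsid describes.
    # "com.example" and "node01" are placeholders; requires root to write to /etc.
    from pathlib import Path

    initiator = "iqn.2024-07.com.example:node01"  # iqn.yyyy-mm.<reversed domain>[:identifier]
    Path("/etc/iscsi/initiatorname.iscsi").write_text(f"InitiatorName={initiator}\n")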
Jul 2 09:47:56.220278 kernel: sdhci: Secure Digital Host Controller Interface driver Jul 2 09:47:56.220286 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 09:47:56.220293 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jul 2 09:47:56.239271 kernel: sdhci: Copyright(c) Pierre Ossman Jul 2 09:47:56.306257 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jul 2 09:47:56.323239 kernel: AES CTR mode by8 optimization enabled Jul 2 09:47:56.359867 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jul 2 09:47:56.359951 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jul 2 09:47:56.360006 kernel: pps pps0: new PPS source ptp0 Jul 2 09:47:56.394718 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jul 2 09:47:56.394815 kernel: igb 0000:03:00.0: added PHC on eth0 Jul 2 09:47:56.409829 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jul 2 09:47:56.409902 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 2 09:47:56.442373 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jul 2 09:47:56.442458 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d2:5e Jul 2 09:47:56.443246 kernel: mlx5_core 0000:01:00.0: firmware version: 14.28.2006 Jul 2 09:47:56.443439 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jul 2 09:47:56.476515 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jul 2 09:47:56.491946 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jul 2 09:47:56.509985 kernel: hub 1-0:1.0: USB hub found Jul 2 09:47:56.510097 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jul 2 09:47:56.539923 kernel: hub 1-0:1.0: 16 ports detected Jul 2 09:47:56.580238 kernel: pps pps1: new PPS source ptp1 Jul 2 09:47:56.603239 kernel: hub 2-0:1.0: USB hub found Jul 2 09:47:56.603349 kernel: igb 0000:04:00.0: added PHC on eth1 Jul 2 09:47:56.603433 kernel: hub 2-0:1.0: 10 ports detected Jul 2 09:47:56.627356 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 2 09:47:56.657602 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d2:5f Jul 2 09:47:56.658320 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jul 2 09:47:56.671013 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Jul 2 09:47:56.671105 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:47:56.704239 kernel: ahci 0000:00:17.0: version 3.0 Jul 2 09:47:56.704312 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Jul 2 09:47:56.704368 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jul 2 09:47:56.704421 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jul 2 09:47:56.708240 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:47:56.720653 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jul 2 09:47:56.815302 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jul 2 09:47:56.815380 kernel: scsi host0: ahci Jul 2 09:47:56.835240 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jul 2 09:47:56.835266 kernel: scsi host1: ahci Jul 2 09:47:56.835280 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Jul 2 09:47:56.888836 kernel: scsi host2: ahci Jul 2 09:47:56.901799 kernel: scsi host3: ahci Jul 2 09:47:56.914643 kernel: scsi host4: ahci Jul 2 09:47:56.927320 kernel: scsi host5: ahci Jul 2 09:47:56.940025 kernel: scsi host6: ahci Jul 2 09:47:56.940098 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 138 Jul 2 09:47:56.957559 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 138 Jul 2 09:47:56.974898 kernel: hub 1-14:1.0: USB hub found Jul 2 09:47:56.974978 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 138 Jul 2 09:47:57.006003 kernel: hub 1-14:1.0: 4 ports detected Jul 2 09:47:57.006078 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 138 Jul 2 09:47:57.036287 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:47:57.036358 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Jul 2 09:47:57.046276 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 138 Jul 2 09:47:57.046294 kernel: mlx5_core 0000:01:00.1: firmware version: 14.28.2006 Jul 2 09:47:57.046399 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jul 2 09:47:57.075628 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:47:57.075697 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 138 Jul 2 09:47:57.184053 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 138 Jul 2 09:47:57.202286 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:47:57.326303 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jul 2 09:47:57.347278 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jul 2 09:47:57.383998 kernel: port_module: 9 callbacks suppressed Jul 2 09:47:57.384014 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jul 2 09:47:57.419313 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Jul 2 09:47:57.439240 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:47:57.439352 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 09:47:57.511324 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 2 09:47:57.511367 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jul 2 
09:47:57.527306 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jul 2 09:47:57.544279 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 2 09:47:57.560302 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 2 09:47:57.576302 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jul 2 09:47:57.593283 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 2 09:47:57.608276 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jul 2 09:47:57.626311 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jul 2 09:47:57.643277 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Jul 2 09:47:57.664285 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jul 2 09:47:57.681293 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:47:57.681366 kernel: ata1.00: Features: NCQ-prio Jul 2 09:47:57.681375 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jul 2 09:47:57.713318 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:47:57.713387 kernel: ata2.00: Features: NCQ-prio Jul 2 09:47:57.782293 kernel: ata1.00: configured for UDMA/133 Jul 2 09:47:57.782315 kernel: ata2.00: configured for UDMA/133 Jul 2 09:47:57.782323 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jul 2 09:47:57.814278 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jul 2 09:47:57.850289 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Jul 2 09:47:57.869295 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:47:57.869381 kernel: usbcore: registered new interface driver usbhid Jul 2 09:47:57.885291 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:47:57.885374 kernel: usbhid: USB HID core driver Jul 2 09:47:57.952304 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jul 2 09:47:57.952335 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Jul 2 09:47:57.969241 kernel: ata2.00: Enabling discard_zeroes_data Jul 2 09:47:57.984890 kernel: ata1.00: Enabling discard_zeroes_data Jul 2 09:47:58.000334 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jul 2 09:47:58.000422 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jul 2 09:47:58.004239 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jul 2 09:47:58.004335 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:47:58.004392 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jul 2 09:47:58.004400 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jul 2 09:47:58.004469 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:47:58.036748 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jul 2 09:47:58.036828 kernel: sd 1:0:0:0: [sda] Write Protect is off Jul 2 09:47:58.036888 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Jul 2 09:47:58.069874 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jul 2 09:47:58.088283 kernel: sd 0:0:0:0: [sdb] Write Protect is off Jul 2 09:47:58.156154 kernel: sd 1:0:0:0: [sda] 
Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 2 09:47:58.156230 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jul 2 09:47:58.276788 kernel: ata2.00: Enabling discard_zeroes_data Jul 2 09:47:58.276808 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 2 09:47:58.313316 kernel: ata2.00: Enabling discard_zeroes_data Jul 2 09:47:58.313331 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jul 2 09:47:58.345444 kernel: ata1.00: Enabling discard_zeroes_data Jul 2 09:47:58.345460 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:47:58.365316 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jul 2 09:47:58.397668 kernel: ata1.00: Enabling discard_zeroes_data Jul 2 09:47:58.397683 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Jul 2 09:47:58.433230 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:47:58.466589 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 09:47:58.490416 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sdb6 scanned by (udev-worker) (536) Jul 2 09:47:58.490346 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 09:47:58.508634 systemd[1]: Finished dracut-initqueue.service. Jul 2 09:47:58.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:58.530246 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 09:47:58.586457 kernel: audit: type=1130 audit(1719913678.520:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:58.583762 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 09:47:58.586599 systemd[1]: Reached target initrd-root-device.target. Jul 2 09:47:58.618448 systemd[1]: Reached target remote-fs-pre.target. Jul 2 09:47:58.626325 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 09:47:58.626449 systemd[1]: Reached target remote-fs.target. Jul 2 09:47:58.649816 systemd[1]: Starting disk-uuid.service... Jul 2 09:47:58.665963 systemd[1]: Starting dracut-pre-mount.service... Jul 2 09:47:58.786328 kernel: audit: type=1130 audit(1719913678.692:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:58.786342 kernel: audit: type=1131 audit(1719913678.692:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:58.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:58.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:58.679990 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 09:47:58.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 09:47:58.680162 systemd[1]: Finished disk-uuid.service. Jul 2 09:47:58.864488 kernel: audit: type=1130 audit(1719913678.794:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:58.694073 systemd[1]: Finished dracut-pre-mount.service. Jul 2 09:47:58.795540 systemd[1]: Reached target local-fs-pre.target. Jul 2 09:47:58.851456 systemd[1]: Reached target local-fs.target. Jul 2 09:47:58.851563 systemd[1]: Reached target sysinit.target. Jul 2 09:47:58.873459 systemd[1]: Reached target basic.target. Jul 2 09:47:58.887182 systemd[1]: Starting systemd-fsck-root.service... Jul 2 09:47:58.894286 systemd[1]: Starting verity-setup.service... Jul 2 09:47:58.905974 systemd-fsck[711]: ROOT: clean, 643/553520 files, 82270/553472 blocks Jul 2 09:47:58.929239 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 09:47:58.947589 systemd[1]: Finished systemd-fsck-root.service. Jul 2 09:47:59.008270 kernel: audit: type=1130 audit(1719913678.955:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:58.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:58.958148 systemd[1]: Mounting sysroot.mount... Jul 2 09:47:59.018424 systemd[1]: Found device dev-mapper-usr.device. Jul 2 09:47:59.031505 systemd[1]: Finished verity-setup.service. Jul 2 09:47:59.149471 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 09:47:59.149487 kernel: audit: type=1130 audit(1719913679.055:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:59.149497 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 09:47:59.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:59.056759 systemd[1]: Mounting sysusr-usr.mount... Jul 2 09:47:59.157032 systemd[1]: Mounted sysroot.mount. Jul 2 09:47:59.170576 systemd[1]: Mounted sysusr-usr.mount. Jul 2 09:47:59.177603 systemd[1]: Reached target initrd-root-fs.target. Jul 2 09:47:59.205399 systemd[1]: Mounting sysroot-usr.mount... Jul 2 09:47:59.213467 systemd[1]: Mounted sysroot-usr.mount. Jul 2 09:47:59.230491 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 09:47:59.242107 systemd[1]: Starting initrd-setup-root.service... Jul 2 09:47:59.350511 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jul 2 09:47:59.350530 kernel: BTRFS info (device sdb6): using free space tree Jul 2 09:47:59.350538 kernel: BTRFS info (device sdb6): has skinny extents Jul 2 09:47:59.350545 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jul 2 09:47:59.342270 systemd[1]: Finished initrd-setup-root.service. Jul 2 09:47:59.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 09:47:59.360619 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 09:47:59.428489 kernel: audit: type=1130 audit(1719913679.358:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:59.420937 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 09:47:59.437624 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 09:47:59.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:59.510734 initrd-setup-root-after-ignition[805]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 09:47:59.534465 kernel: audit: type=1130 audit(1719913679.456:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:59.478933 systemd[1]: Reached target ignition-subsequent.target. Jul 2 09:47:59.519844 systemd[1]: Starting initrd-parse-etc.service... Jul 2 09:47:59.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:59.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:59.546710 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 09:47:59.633507 kernel: audit: type=1130 audit(1719913679.557:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:59.546755 systemd[1]: Finished initrd-parse-etc.service. Jul 2 09:47:59.578537 systemd[1]: Reached target initrd-fs.target. Jul 2 09:47:59.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:59.619487 systemd[1]: Reached target initrd.target. Jul 2 09:47:59.619621 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 09:47:59.619963 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 09:47:59.640671 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 09:47:59.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:59.657208 systemd[1]: Starting initrd-cleanup.service... Jul 2 09:47:59.674528 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 09:47:59.686637 systemd[1]: Stopped target timers.target. Jul 2 09:47:59.704771 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 09:47:59.705068 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 09:47:59.721131 systemd[1]: Stopped target initrd.target. Jul 2 09:47:59.734898 systemd[1]: Stopped target basic.target. Jul 2 09:47:59.748907 systemd[1]: Stopped target ignition-subsequent.target. 
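Editor's note: the sd messages earlier in this log report each Micron SSD as 937703088 512-byte logical blocks, printed as "(480 GB/447 GiB)". Both figures are the same byte count, expressed in decimal versus binary units; a quick check:

    # Verify the capacity the kernel reports for sda/sdb.
    blocks, block_size = 937_703_088, 512
    size_bytes = blocks * block_size          # 480_103_981_056 bytes
    print(f"{size_bytes / 1000**3:.1f} GB")   # ~480.1 (decimal gigabytes)
    print(f"{size_bytes / 1024**3:.1f} GiB")  # ~447.1 (binary gibibytes)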
Jul 2 09:47:59.765892 systemd[1]: Stopped target ignition-diskful-subsequent.target. Jul 2 09:47:59.782902 systemd[1]: Stopped target initrd-root-device.target. Jul 2 09:47:59.799894 systemd[1]: Stopped target paths.target. Jul 2 09:47:59.813876 systemd[1]: Stopped target remote-fs.target. Jul 2 09:47:59.829897 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 09:47:59.845898 systemd[1]: Stopped target slices.target. Jul 2 09:47:59.860896 systemd[1]: Stopped target sockets.target. Jul 2 09:47:59.877880 systemd[1]: Stopped target sysinit.target. Jul 2 09:47:59.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:59.893889 systemd[1]: Stopped target local-fs.target. Jul 2 09:47:59.909902 systemd[1]: Stopped target local-fs-pre.target. Jul 2 09:47:59.925899 systemd[1]: Stopped target swap.target. Jul 2 09:48:00.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:59.940834 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 09:48:00.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:59.941066 systemd[1]: Closed iscsid.socket. Jul 2 09:48:00.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:59.954911 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 09:47:59.955232 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 09:48:00.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:59.971110 systemd[1]: Stopped target cryptsetup.target. Jul 2 09:48:00.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:59.986792 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 09:48:00.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:47:59.990453 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 09:48:00.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:00.001786 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 09:48:00.002121 systemd[1]: Stopped dracut-initqueue.service. Jul 2 09:48:00.018018 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 09:48:00.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 09:48:00.018382 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 09:48:00.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:00.034983 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 09:48:00.035324 systemd[1]: Stopped initrd-setup-root.service. Jul 2 09:48:00.051328 systemd[1]: Stopping iscsiuio.service... Jul 2 09:48:00.064437 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 09:48:00.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:00.064866 systemd[1]: Stopped systemd-sysctl.service. Jul 2 09:48:00.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:00.081109 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 09:48:00.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:00.081449 systemd[1]: Stopped systemd-modules-load.service. Jul 2 09:48:00.095974 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 09:48:00.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:00.096310 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 09:48:00.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:00.110977 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 09:48:00.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:00.111339 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 09:48:00.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:00.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:00.130375 systemd[1]: Stopping systemd-udevd.service... Jul 2 09:48:00.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:00.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:00.146022 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jul 2 09:48:00.146445 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 09:48:00.146491 systemd[1]: Stopped iscsiuio.service. Jul 2 09:48:00.166801 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 09:48:00.433000 audit: BPF prog-id=7 op=UNLOAD Jul 2 09:48:00.433000 audit: BPF prog-id=6 op=UNLOAD Jul 2 09:48:00.437000 audit: BPF prog-id=5 op=UNLOAD Jul 2 09:48:00.437000 audit: BPF prog-id=4 op=UNLOAD Jul 2 09:48:00.437000 audit: BPF prog-id=3 op=UNLOAD Jul 2 09:48:00.166888 systemd[1]: Stopped systemd-udevd.service. Jul 2 09:48:00.185853 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 09:48:00.479841 iscsid[451]: iscsid shutting down. Jul 2 09:48:00.185930 systemd[1]: Closed iscsiuio.socket. Jul 2 09:48:00.200513 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 09:48:00.200610 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 09:48:00.218591 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 09:48:00.218693 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 09:48:00.234569 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 09:48:00.234712 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 09:48:00.249667 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 09:48:00.249806 systemd[1]: Stopped dracut-cmdline.service. Jul 2 09:48:00.267666 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 09:48:00.267806 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 09:48:00.285339 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 09:48:00.301444 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 09:48:00.301591 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 2 09:48:00.317872 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 09:48:00.317994 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 09:48:00.335729 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 09:48:00.480257 systemd-journald[267]: Received SIGTERM from PID 1 (n/a). Jul 2 09:48:00.335863 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 09:48:00.355103 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 2 09:48:00.356474 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 09:48:00.356753 systemd[1]: Finished initrd-cleanup.service. Jul 2 09:48:00.372089 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 09:48:00.372297 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 09:48:00.388498 systemd[1]: Reached target initrd-switch-root.target. Jul 2 09:48:00.403176 systemd[1]: Starting initrd-switch-root.service... Jul 2 09:48:00.425191 systemd[1]: Switching root. Jul 2 09:48:00.480453 systemd-journald[267]: Journal stopped Jul 2 09:48:04.310326 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 09:48:04.310354 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 2 09:48:04.310363 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 09:48:04.310369 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 09:48:04.310374 kernel: SELinux: policy capability open_perms=1 Jul 2 09:48:04.310378 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 09:48:04.310385 kernel: SELinux: policy capability always_check_network=0 Jul 2 09:48:04.310390 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 09:48:04.310395 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 09:48:04.310401 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 09:48:04.310407 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 09:48:04.310413 systemd[1]: Successfully loaded SELinux policy in 289.384ms. Jul 2 09:48:04.310419 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.779ms. Jul 2 09:48:04.310426 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 09:48:04.310433 systemd[1]: Detected architecture x86-64. Jul 2 09:48:04.310440 systemd[1]: Detected first boot. Jul 2 09:48:04.310446 systemd[1]: Hostname set to . Jul 2 09:48:04.310452 systemd[1]: Initializing machine ID from random generator. Jul 2 09:48:04.310468 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 09:48:04.310474 systemd[1]: Populated /etc with preset unit settings. Jul 2 09:48:04.310480 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 09:48:04.310487 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 09:48:04.310494 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 09:48:04.310500 systemd[1]: Queued start job for default target multi-user.target. Jul 2 09:48:04.310506 systemd[1]: Unnecessary job was removed for dev-sdb6.device. Jul 2 09:48:04.310512 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 09:48:04.310519 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 09:48:04.310526 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Jul 2 09:48:04.310533 systemd[1]: Created slice system-getty.slice. Jul 2 09:48:04.310539 systemd[1]: Created slice system-modprobe.slice. Jul 2 09:48:04.310545 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 09:48:04.310551 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 09:48:04.310557 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 09:48:04.310564 systemd[1]: Created slice user.slice. Jul 2 09:48:04.310570 systemd[1]: Started systemd-ask-password-console.path. Jul 2 09:48:04.310576 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 09:48:04.310582 systemd[1]: Set up automount boot.automount. Jul 2 09:48:04.310588 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 09:48:04.310594 systemd[1]: Reached target integritysetup.target. 
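Editor's note: the SELinux lines above show the policy loading in about 289 ms, with a few classes not defined in policy that the kernel notes will simply be allowed. When selinuxfs is mounted, the current enforcing state can be read from /sys/fs/selinux/enforce; this is a standard kernel interface, not something specific to this host:

    # Read the SELinux enforcing state from selinuxfs (1 = enforcing, 0 = permissive).
    from pathlib import Path

    enforce = Path("/sys/fs/selinux/enforce")
    if enforce.exists():
        print("enforcing" if enforce.read_text().strip() == "1" else "permissive")
    else:
        print("SELinux not enabled (selinuxfs not mounted)")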
Jul 2 09:48:04.310600 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 09:48:04.310608 systemd[1]: Reached target remote-fs.target. Jul 2 09:48:04.310614 systemd[1]: Reached target slices.target. Jul 2 09:48:04.310620 systemd[1]: Reached target swap.target. Jul 2 09:48:04.310627 systemd[1]: Reached target torcx.target. Jul 2 09:48:04.310634 systemd[1]: Reached target veritysetup.target. Jul 2 09:48:04.310640 systemd[1]: Listening on systemd-coredump.socket. Jul 2 09:48:04.310646 systemd[1]: Listening on systemd-initctl.socket. Jul 2 09:48:04.310652 kernel: kauditd_printk_skb: 40 callbacks suppressed Jul 2 09:48:04.310668 kernel: audit: type=1400 audit(1719913683.554:61): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 09:48:04.310675 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 09:48:04.310682 kernel: audit: type=1335 audit(1719913683.554:62): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 2 09:48:04.310688 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 09:48:04.310695 systemd[1]: Listening on systemd-journald.socket. Jul 2 09:48:04.310701 systemd[1]: Listening on systemd-networkd.socket. Jul 2 09:48:04.310707 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 09:48:04.310714 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 09:48:04.310721 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 09:48:04.310728 systemd[1]: Mounting dev-hugepages.mount... Jul 2 09:48:04.310734 systemd[1]: Mounting dev-mqueue.mount... Jul 2 09:48:04.310740 systemd[1]: Mounting media.mount... Jul 2 09:48:04.310747 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 09:48:04.310753 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 09:48:04.310759 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 09:48:04.310766 systemd[1]: Mounting tmp.mount... Jul 2 09:48:04.310772 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 09:48:04.310780 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 09:48:04.310786 systemd[1]: Starting kmod-static-nodes.service... Jul 2 09:48:04.310792 systemd[1]: Starting modprobe@configfs.service... Jul 2 09:48:04.310799 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 09:48:04.310805 systemd[1]: Starting modprobe@drm.service... Jul 2 09:48:04.310812 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 09:48:04.310818 systemd[1]: Starting modprobe@fuse.service... Jul 2 09:48:04.310824 kernel: fuse: init (API version 7.34) Jul 2 09:48:04.310830 systemd[1]: Starting modprobe@loop.service... Jul 2 09:48:04.310838 kernel: loop: module loaded Jul 2 09:48:04.310844 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 09:48:04.310850 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 2 09:48:04.310857 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 2 09:48:04.310863 systemd[1]: Starting systemd-journald.service... Jul 2 09:48:04.310869 systemd[1]: Starting systemd-modules-load.service... 
Jul 2 09:48:04.310875 kernel: audit: type=1305 audit(1719913684.306:63): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 09:48:04.310883 systemd-journald[989]: Journal started Jul 2 09:48:04.310909 systemd-journald[989]: Runtime Journal (/run/log/journal/8f9c9ee9ede84ed0bcb558925dea068e) is 8.0M, max 640.0M, 632.0M free. Jul 2 09:48:03.554000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 09:48:03.554000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 2 09:48:04.306000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 09:48:04.306000 audit[989]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc104482c0 a2=4000 a3=7ffc1044835c items=0 ppid=1 pid=989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 09:48:04.306000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 09:48:04.358303 kernel: audit: type=1300 audit(1719913684.306:63): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc104482c0 a2=4000 a3=7ffc1044835c items=0 ppid=1 pid=989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 09:48:04.358336 kernel: audit: type=1327 audit(1719913684.306:63): proctitle="/usr/lib/systemd/systemd-journald" Jul 2 09:48:04.472449 systemd[1]: Starting systemd-network-generator.service... Jul 2 09:48:04.499283 systemd[1]: Starting systemd-remount-fs.service... Jul 2 09:48:04.526300 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 09:48:04.569286 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 09:48:04.588283 systemd[1]: Started systemd-journald.service. Jul 2 09:48:04.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.596984 systemd[1]: Mounted dev-hugepages.mount. Jul 2 09:48:04.644444 kernel: audit: type=1130 audit(1719913684.595:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.651495 systemd[1]: Mounted dev-mqueue.mount. Jul 2 09:48:04.658485 systemd[1]: Mounted media.mount. Jul 2 09:48:04.665504 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 09:48:04.674478 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 09:48:04.683501 systemd[1]: Mounted tmp.mount. Jul 2 09:48:04.690602 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 09:48:04.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 09:48:04.699686 systemd[1]: Finished kmod-static-nodes.service. Jul 2 09:48:04.747282 kernel: audit: type=1130 audit(1719913684.698:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.755570 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 09:48:04.755650 systemd[1]: Finished modprobe@configfs.service. Jul 2 09:48:04.804443 kernel: audit: type=1130 audit(1719913684.754:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.812588 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 09:48:04.812664 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 09:48:04.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.863297 kernel: audit: type=1130 audit(1719913684.811:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.863317 kernel: audit: type=1131 audit(1719913684.811:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.922591 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 09:48:04.922666 systemd[1]: Finished modprobe@drm.service. Jul 2 09:48:04.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.931615 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 09:48:04.931691 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 2 09:48:04.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.940587 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 09:48:04.940662 systemd[1]: Finished modprobe@fuse.service. Jul 2 09:48:04.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.949565 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 09:48:04.949650 systemd[1]: Finished modprobe@loop.service. Jul 2 09:48:04.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.958666 systemd[1]: Finished systemd-modules-load.service. Jul 2 09:48:04.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.967645 systemd[1]: Finished systemd-network-generator.service. Jul 2 09:48:04.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.976623 systemd[1]: Finished systemd-remount-fs.service. Jul 2 09:48:04.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.985682 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 09:48:04.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:04.995856 systemd[1]: Reached target network-pre.target. Jul 2 09:48:05.007815 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 09:48:05.016975 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 09:48:05.024447 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 09:48:05.025588 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 09:48:05.032968 systemd[1]: Starting systemd-journal-flush.service... 
Jul 2 09:48:05.036580 systemd-journald[989]: Time spent on flushing to /var/log/journal/8f9c9ee9ede84ed0bcb558925dea068e is 10.144ms for 1223 entries. Jul 2 09:48:05.036580 systemd-journald[989]: System Journal (/var/log/journal/8f9c9ee9ede84ed0bcb558925dea068e) is 8.0M, max 195.6M, 187.6M free. Jul 2 09:48:05.072635 systemd-journald[989]: Received client request to flush runtime journal. Jul 2 09:48:05.049375 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 09:48:05.049896 systemd[1]: Starting systemd-random-seed.service... Jul 2 09:48:05.067381 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 09:48:05.067956 systemd[1]: Starting systemd-sysctl.service... Jul 2 09:48:05.074958 systemd[1]: Starting systemd-sysusers.service... Jul 2 09:48:05.082924 systemd[1]: Starting systemd-udev-settle.service... Jul 2 09:48:05.091549 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 09:48:05.100405 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 09:48:05.108478 systemd[1]: Finished systemd-journal-flush.service. Jul 2 09:48:05.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:05.116433 systemd[1]: Finished systemd-random-seed.service. Jul 2 09:48:05.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:05.124500 systemd[1]: Finished systemd-sysctl.service. Jul 2 09:48:05.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:05.132456 systemd[1]: Finished systemd-sysusers.service. Jul 2 09:48:05.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:05.141376 systemd[1]: Reached target first-boot-complete.target. Jul 2 09:48:05.150023 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 09:48:05.159485 udevadm[1016]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 09:48:05.168851 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 09:48:05.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:05.349738 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 09:48:05.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:05.359210 systemd[1]: Starting systemd-udevd.service... Jul 2 09:48:05.370728 systemd-udevd[1023]: Using default interface naming scheme 'v252'. Jul 2 09:48:05.387496 systemd[1]: Started systemd-udevd.service. 
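(Annotation, not part of the log.) The journald flush line above reports 10.144 ms spent flushing 1223 entries to /var/log/journal/8f9c9ee9ede84ed0bcb558925dea068e. As a quick back-of-the-envelope check, that is roughly 8 µs per entry:

```python
# Sanity-check of the systemd-journald flush figures reported above
# (10.144 ms for 1223 entries).
flush_ms = 10.144
entries = 1223
print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~8.3 us per entry
```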
Jul 2 09:48:05.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:05.398479 systemd[1]: Found device dev-ttyS1.device. Jul 2 09:48:05.420560 systemd[1]: Starting systemd-networkd.service... Jul 2 09:48:05.424262 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 09:48:05.446245 kernel: IPMI message handler: version 39.2 Jul 2 09:48:05.446299 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Jul 2 09:48:05.453532 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 09:48:05.470243 kernel: ACPI: button: Sleep Button [SLPB] Jul 2 09:48:05.470295 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:48:05.470448 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 2 09:48:05.470478 kernel: ACPI: button: Power Button [PWRF] Jul 2 09:48:05.509452 systemd[1]: Starting systemd-userdbd.service... Jul 2 09:48:05.555281 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:48:05.617246 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:48:05.434000 audit[1089]: AVC avc: denied { confidentiality } for pid=1089 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 09:48:05.638246 kernel: ipmi device interface Jul 2 09:48:05.638315 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:48:05.664216 systemd[1]: Started systemd-userdbd.service. Jul 2 09:48:05.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 09:48:05.719175 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Jul 2 09:48:05.719322 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Jul 2 09:48:05.742239 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Jul 2 09:48:05.767238 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:48:05.434000 audit[1089]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7f6e98096010 a1=4d8bc a2=7f6e99d30bc5 a3=5 items=42 ppid=1023 pid=1089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 09:48:05.434000 audit: CWD cwd="/" Jul 2 09:48:05.434000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=1 name=(null) inode=18566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=2 name=(null) inode=18566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=3 name=(null) inode=18567 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=4 name=(null) inode=18566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=5 name=(null) inode=18568 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=6 name=(null) inode=18566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=7 name=(null) inode=18569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=8 name=(null) inode=18569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=9 name=(null) inode=18570 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=10 name=(null) inode=18569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=11 name=(null) inode=18571 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=12 name=(null) inode=18569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 
09:48:05.434000 audit: PATH item=13 name=(null) inode=18572 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=14 name=(null) inode=18569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=15 name=(null) inode=18573 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=16 name=(null) inode=18569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=17 name=(null) inode=18574 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=18 name=(null) inode=18566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=19 name=(null) inode=18575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=20 name=(null) inode=18575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=21 name=(null) inode=18576 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=22 name=(null) inode=18575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=23 name=(null) inode=18577 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=24 name=(null) inode=18575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=25 name=(null) inode=18578 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=26 name=(null) inode=18575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=27 name=(null) inode=18579 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=28 name=(null) inode=18575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=29 name=(null) inode=18580 dev=00:0b mode=0100440 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=30 name=(null) inode=18566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=31 name=(null) inode=18581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=32 name=(null) inode=18581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=33 name=(null) inode=18582 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=34 name=(null) inode=18581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=35 name=(null) inode=18583 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=36 name=(null) inode=18581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=37 name=(null) inode=18584 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=38 name=(null) inode=18581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=39 name=(null) inode=18585 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=40 name=(null) inode=18581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PATH item=41 name=(null) inode=18586 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 09:48:05.434000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 09:48:05.813998 kernel: ipmi_si: IPMI System Interface driver Jul 2 09:48:05.814032 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Jul 2 09:48:05.814130 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Jul 2 09:48:05.837130 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Jul 2 09:48:05.858631 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Jul 2 09:48:05.858752 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:48:05.902571 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Jul 2 09:48:05.926240 kernel: iTCO_vendor_support: vendor-support=0 Jul 2 09:48:05.946246 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME 
Interface Jul 2 09:48:05.946502 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Jul 2 09:48:06.017244 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Jul 2 09:48:06.017352 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Jul 2 09:48:06.017412 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Jul 2 09:48:06.033396 systemd-networkd[1101]: bond0: netdev ready Jul 2 09:48:06.035495 systemd-networkd[1101]: lo: Link UP Jul 2 09:48:06.035497 systemd-networkd[1101]: lo: Gained carrier Jul 2 09:48:06.035821 systemd-networkd[1101]: Enumeration completed Jul 2 09:48:06.035922 systemd[1]: Started systemd-networkd.service. Jul 2 09:48:06.036116 systemd-networkd[1101]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jul 2 09:48:06.041279 kernel: ipmi_si: Adding ACPI-specified kcs state machine Jul 2 09:48:06.041304 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Jul 2 09:48:06.041317 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:48:06.061774 systemd-networkd[1101]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:97:f8:2d.network. Jul 2 09:48:06.062238 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:48:06.106267 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Jul 2 09:48:06.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:06.209240 kernel: intel_rapl_common: Found RAPL domain package Jul 2 09:48:06.209274 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Jul 2 09:48:06.209370 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:48:06.233240 kernel: intel_rapl_common: Found RAPL domain core Jul 2 09:48:06.233270 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jul 2 09:48:06.235799 systemd-networkd[1101]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:97:f8:2c.network. 
Jul 2 09:48:06.236237 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Jul 2 09:48:06.278999 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 2 09:48:06.279027 kernel: intel_rapl_common: Found RAPL domain dram Jul 2 09:48:06.319238 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Jul 2 09:48:06.379295 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:48:06.399239 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 2 09:48:06.440240 kernel: ipmi_ssif: IPMI SSIF Interface driver Jul 2 09:48:06.471285 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jul 2 09:48:06.495373 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Jul 2 09:48:06.517314 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Jul 2 09:48:06.517409 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:48:06.527166 systemd-networkd[1101]: bond0: Link UP Jul 2 09:48:06.527374 systemd-networkd[1101]: enp1s0f1np1: Link UP Jul 2 09:48:06.527533 systemd-networkd[1101]: enp1s0f0np0: Link UP Jul 2 09:48:06.527661 systemd-networkd[1101]: enp1s0f1np1: Gained carrier Jul 2 09:48:06.528606 systemd-networkd[1101]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:97:f8:2c.network. Jul 2 09:48:06.587636 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Jul 2 09:48:06.587661 kernel: bond0: active interface up! Jul 2 09:48:06.610237 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Jul 2 09:48:06.647273 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 2 09:48:06.670241 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:48:06.674557 systemd[1]: Finished systemd-udev-settle.service. Jul 2 09:48:06.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:06.686998 systemd[1]: Starting lvm2-activation-early.service... Jul 2 09:48:06.727376 lvm[1130]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 09:48:06.740248 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 09:48:06.762249 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 09:48:06.785242 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 09:48:06.808238 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 09:48:06.830238 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 09:48:06.831751 systemd[1]: Finished lvm2-activation-early.service. Jul 2 09:48:06.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:06.849431 systemd[1]: Reached target cryptsetup.target. Jul 2 09:48:06.853238 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 09:48:06.871958 systemd[1]: Starting lvm2-activation.service... 
Jul 2 09:48:06.874143 lvm[1132]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 09:48:06.876241 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 09:48:06.898240 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 09:48:06.920306 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 09:48:06.942280 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 09:48:06.943705 systemd[1]: Finished lvm2-activation.service. Jul 2 09:48:06.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:06.961391 systemd[1]: Reached target local-fs-pre.target. Jul 2 09:48:06.964267 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 09:48:06.982332 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 09:48:06.982346 systemd[1]: Reached target local-fs.target. Jul 2 09:48:06.986281 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 09:48:07.003335 systemd[1]: Reached target machines.target. Jul 2 09:48:07.008276 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 09:48:07.025970 systemd[1]: Starting ldconfig.service... Jul 2 09:48:07.030282 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 09:48:07.046900 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 09:48:07.046921 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 09:48:07.047523 systemd[1]: Starting systemd-boot-update.service... Jul 2 09:48:07.052275 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 09:48:07.067785 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 09:48:07.073240 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 09:48:07.075338 systemd-networkd[1101]: bond0: Gained carrier Jul 2 09:48:07.075471 systemd-networkd[1101]: enp1s0f0np0: Gained carrier Jul 2 09:48:07.091926 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 09:48:07.094241 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 09:48:07.094287 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Jul 2 09:48:07.109641 systemd[1]: Starting systemd-sysext.service... Jul 2 09:48:07.109828 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1135 (bootctl) Jul 2 09:48:07.110480 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 09:48:07.112206 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 09:48:07.112520 systemd[1]: Finished systemd-machine-id-commit.service. 
Jul 2 09:48:07.125833 systemd-networkd[1101]: enp1s0f1np1: Link DOWN Jul 2 09:48:07.125847 systemd-networkd[1101]: enp1s0f1np1: Lost carrier Jul 2 09:48:07.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:07.136327 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 09:48:07.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:07.137982 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 09:48:07.139941 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 09:48:07.140056 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 09:48:07.162242 kernel: loop0: detected capacity change from 0 to 209816 Jul 2 09:48:07.193460 systemd-fsck[1148]: fsck.fat 4.2 (2021-01-31) Jul 2 09:48:07.193460 systemd-fsck[1148]: /dev/sdb1: 789 files, 119238/258078 clusters Jul 2 09:48:07.194168 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 09:48:07.194245 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 09:48:07.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:07.206188 systemd[1]: Mounting boot.mount... Jul 2 09:48:07.226340 systemd[1]: Mounted boot.mount. Jul 2 09:48:07.247085 systemd[1]: Finished systemd-boot-update.service. Jul 2 09:48:07.249240 kernel: loop1: detected capacity change from 0 to 209816 Jul 2 09:48:07.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:07.262562 (sd-sysext)[1158]: Using extensions 'kubernetes'. Jul 2 09:48:07.262744 (sd-sysext)[1158]: Merged extensions into '/usr'. Jul 2 09:48:07.271803 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 09:48:07.272559 systemd[1]: Mounting usr-share-oem.mount... Jul 2 09:48:07.285272 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jul 2 09:48:07.299444 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 09:48:07.300097 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 09:48:07.303253 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 Jul 2 09:48:07.304164 systemd-networkd[1101]: enp1s0f1np1: Link UP Jul 2 09:48:07.304332 systemd-networkd[1101]: enp1s0f1np1: Gained carrier Jul 2 09:48:07.317908 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 09:48:07.322291 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Jul 2 09:48:07.336864 systemd[1]: Starting modprobe@loop.service... Jul 2 09:48:07.340290 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Jul 2 09:48:07.343708 ldconfig[1134]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
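(Annotation, not part of the log.) The ldconfig warning directly above is typically harmless: /lib/ld.so.conf is a plain-text configuration file, so ldconfig's ELF probe of it fails with "wrong magic bytes". A minimal sketch of the kind of check being described (a hypothetical helper, not part of ldconfig itself):

```python
# Illustrative only: an ELF binary starts with the 4-byte magic b"\x7fELF".
# A text file such as /lib/ld.so.conf fails this check, which is what the
# ldconfig message above is complaining about.
def looks_like_elf(path: str) -> bool:
    with open(path, "rb") as f:
        return f.read(4) == b"\x7fELF"

print(looks_like_elf("/lib/ld.so.conf"))  # expected: False on this system
```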
Jul 2 09:48:07.346370 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 09:48:07.346441 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 09:48:07.346509 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 09:48:07.348423 systemd[1]: Finished ldconfig.service. Jul 2 09:48:07.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:07.355470 systemd[1]: Mounted usr-share-oem.mount. Jul 2 09:48:07.362487 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 09:48:07.362565 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 09:48:07.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:07.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:07.370516 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 09:48:07.370591 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 09:48:07.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:07.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:07.378549 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 09:48:07.378623 systemd[1]: Finished modprobe@loop.service. Jul 2 09:48:07.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:07.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:07.386586 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 09:48:07.386644 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 09:48:07.387126 systemd[1]: Finished systemd-sysext.service. Jul 2 09:48:07.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:07.396021 systemd[1]: Starting ensure-sysext.service... Jul 2 09:48:07.402848 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Jul 2 09:48:07.408362 systemd-tmpfiles[1176]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 09:48:07.408896 systemd-tmpfiles[1176]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 09:48:07.409880 systemd-tmpfiles[1176]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 09:48:07.412469 systemd[1]: Reloading. Jul 2 09:48:07.432713 /usr/lib/systemd/system-generators/torcx-generator[1195]: time="2024-07-02T09:48:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 09:48:07.432729 /usr/lib/systemd/system-generators/torcx-generator[1195]: time="2024-07-02T09:48:07Z" level=info msg="torcx already run" Jul 2 09:48:07.488536 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 09:48:07.488544 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 09:48:07.500794 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 09:48:07.541075 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 09:48:07.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 09:48:07.550848 systemd[1]: Starting audit-rules.service... Jul 2 09:48:07.558858 systemd[1]: Starting clean-ca-certificates.service... Jul 2 09:48:07.564000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 09:48:07.564000 audit[1278]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc5dd684e0 a2=420 a3=0 items=0 ppid=1261 pid=1278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 09:48:07.564000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 09:48:07.566094 augenrules[1278]: No rules Jul 2 09:48:07.568032 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 09:48:07.578138 systemd[1]: Starting systemd-resolved.service... Jul 2 09:48:07.586101 systemd[1]: Starting systemd-timesyncd.service... Jul 2 09:48:07.593897 systemd[1]: Starting systemd-update-utmp.service... Jul 2 09:48:07.600670 systemd[1]: Finished audit-rules.service. Jul 2 09:48:07.607484 systemd[1]: Finished clean-ca-certificates.service. Jul 2 09:48:07.615464 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 09:48:07.628278 systemd[1]: Starting systemd-update-done.service... Jul 2 09:48:07.635319 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 09:48:07.635835 systemd[1]: Finished systemd-update-done.service. 
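(Annotation, not part of the log.) The audit records above carry a hex-encoded PROCTITLE field for the auditctl invocation that loaded /etc/audit/audit.rules. The audit subsystem encodes the process title as hex bytes with NUL separators between argv elements; a small helper (illustrative, not part of the audit toolchain) recovers the command line:

```python
# Decode the hex-encoded PROCTITLE field from an audit record. argv
# elements are separated by NUL bytes in the raw encoding.
def decode_proctitle(hex_str: str) -> str:
    raw = bytes.fromhex(hex_str)
    return " ".join(part.decode() for part in raw.split(b"\x00") if part)

print(decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
))
# -> /sbin/auditctl -R /etc/audit/audit.rules
```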
Jul 2 09:48:07.645409 systemd[1]: Finished systemd-update-utmp.service. Jul 2 09:48:07.654983 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 09:48:07.655642 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 09:48:07.662085 systemd-resolved[1286]: Positive Trust Anchors: Jul 2 09:48:07.662090 systemd-resolved[1286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 09:48:07.662109 systemd-resolved[1286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 09:48:07.662936 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 09:48:07.670902 systemd[1]: Starting modprobe@loop.service... Jul 2 09:48:07.677350 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 09:48:07.677420 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 09:48:07.677480 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 09:48:07.677896 systemd[1]: Started systemd-timesyncd.service. Jul 2 09:48:07.679839 systemd-resolved[1286]: Using system hostname 'ci-3510.3.5-a-ce93e9d8e3'. Jul 2 09:48:07.686976 systemd[1]: Started systemd-resolved.service. Jul 2 09:48:07.695507 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 09:48:07.695588 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 09:48:07.703508 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 09:48:07.703583 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 09:48:07.711507 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 09:48:07.711589 systemd[1]: Finished modprobe@loop.service. Jul 2 09:48:07.719528 systemd[1]: Reached target network.target. Jul 2 09:48:07.727366 systemd[1]: Reached target nss-lookup.target. Jul 2 09:48:07.735368 systemd[1]: Reached target time-set.target. Jul 2 09:48:07.743353 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 09:48:07.743428 systemd[1]: Reached target sysinit.target. Jul 2 09:48:07.751410 systemd[1]: Started motdgen.path. Jul 2 09:48:07.758386 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 09:48:07.768446 systemd[1]: Started logrotate.timer. Jul 2 09:48:07.775425 systemd[1]: Started mdadm.timer. Jul 2 09:48:07.782367 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 09:48:07.790460 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 09:48:07.790520 systemd[1]: Reached target paths.target. Jul 2 09:48:07.797372 systemd[1]: Reached target timers.target. Jul 2 09:48:07.804499 systemd[1]: Listening on dbus.socket. Jul 2 09:48:07.812018 systemd[1]: Starting docker.socket... Jul 2 09:48:07.819209 systemd[1]: Listening on sshd.socket. 
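(Annotation, not part of the log.) Earlier in this stretch, systemd-resolved logs its positive trust anchor for the DNS root zone as a DS record. For reference, the fields are key tag 20326 (the 2017 root KSK), algorithm 8 (RSA/SHA-256) and digest type 2 (SHA-256); a short parse of the logged record, for illustration only:

```python
# Split the DS record that systemd-resolved reports as its positive trust
# anchor: ". IN DS <key tag> <algorithm> <digest type> <digest>".
record = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = record.split()
assert len(digest) == 64  # SHA-256 digest -> 32 bytes -> 64 hex characters
print(owner, key_tag, algorithm, digest_type)
```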
Jul 2 09:48:07.826411 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 09:48:07.826484 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 09:48:07.827289 systemd[1]: Listening on docker.socket. Jul 2 09:48:07.834699 systemd[1]: Reached target sockets.target. Jul 2 09:48:07.843359 systemd[1]: Reached target basic.target. Jul 2 09:48:07.850393 systemd[1]: System is tainted: cgroupsv1 Jul 2 09:48:07.850428 systemd[1]: Stopped target timers.target. Jul 2 09:48:07.857295 systemd[1]: Stopping timers.target... Jul 2 09:48:07.864332 systemd[1]: Reached target timers.target. Jul 2 09:48:07.871351 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 09:48:07.871405 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 09:48:07.872024 systemd[1]: Starting containerd.service... Jul 2 09:48:07.878833 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Jul 2 09:48:07.888038 systemd[1]: Starting coreos-metadata.service... Jul 2 09:48:07.894998 systemd[1]: Starting dbus.service... Jul 2 09:48:07.900979 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 09:48:07.906530 jq[1310]: false Jul 2 09:48:07.908069 systemd[1]: Starting extend-filesystems.service... Jul 2 09:48:07.915075 systemd[1]: Starting motdgen.service... Jul 2 09:48:07.916508 extend-filesystems[1314]: Found loop1 Jul 2 09:48:07.933460 extend-filesystems[1314]: Found sda Jul 2 09:48:07.933460 extend-filesystems[1314]: Found sdb Jul 2 09:48:07.933460 extend-filesystems[1314]: Found sdb1 Jul 2 09:48:07.933460 extend-filesystems[1314]: Found sdb2 Jul 2 09:48:07.933460 extend-filesystems[1314]: Found sdb3 Jul 2 09:48:07.933460 extend-filesystems[1314]: Found usr Jul 2 09:48:07.933460 extend-filesystems[1314]: Found sdb4 Jul 2 09:48:07.933460 extend-filesystems[1314]: Found sdb6 Jul 2 09:48:07.933460 extend-filesystems[1314]: Found sdb7 Jul 2 09:48:07.933460 extend-filesystems[1314]: Found sdb9 Jul 2 09:48:07.933460 extend-filesystems[1314]: Checking size of /dev/sdb9 Jul 2 09:48:07.933460 extend-filesystems[1314]: Resized partition /dev/sdb9 Jul 2 09:48:08.050350 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Jul 2 09:48:07.916678 dbus-daemon[1309]: [system] SELinux support is enabled Jul 2 09:48:07.922588 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 09:48:08.050560 coreos-metadata[1305]: Jul 02 09:48:07.936 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 2 09:48:08.050671 coreos-metadata[1306]: Jul 02 09:48:07.937 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 2 09:48:08.050767 extend-filesystems[1330]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 09:48:07.955138 systemd[1]: Starting sshd-keygen.service... Jul 2 09:48:07.970439 systemd[1]: Starting systemd-logind.service... Jul 2 09:48:07.987273 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 2 09:48:08.070572 update_engine[1344]: I0702 09:48:08.047731 1344 main.cc:92] Flatcar Update Engine starting Jul 2 09:48:08.070572 update_engine[1344]: I0702 09:48:08.051098 1344 update_check_scheduler.cc:74] Next update check in 11m4s Jul 2 09:48:07.987974 systemd[1]: Starting tcsd.service... Jul 2 09:48:08.070751 jq[1345]: true Jul 2 09:48:08.001365 systemd-logind[1342]: Watching system buttons on /dev/input/event3 (Power Button) Jul 2 09:48:08.001375 systemd-logind[1342]: Watching system buttons on /dev/input/event2 (Sleep Button) Jul 2 09:48:08.001385 systemd-logind[1342]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Jul 2 09:48:08.001530 systemd-logind[1342]: New seat seat0. Jul 2 09:48:08.006021 systemd[1]: Starting update-engine.service... Jul 2 09:48:08.023989 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 09:48:08.043356 systemd[1]: Started dbus.service. Jul 2 09:48:08.064144 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 09:48:08.064272 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 09:48:08.064495 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 09:48:08.064605 systemd[1]: Finished motdgen.service. Jul 2 09:48:08.077667 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 09:48:08.077780 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 2 09:48:08.088967 jq[1349]: false Jul 2 09:48:08.089931 systemd[1]: update-ssh-keys-after-ignition.service: Skipped due to 'exec-condition'. Jul 2 09:48:08.090050 systemd[1]: Condition check resulted in update-ssh-keys-after-ignition.service being skipped. Jul 2 09:48:08.091587 dbus-daemon[1309]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 09:48:08.093744 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Jul 2 09:48:08.093881 systemd[1]: Condition check resulted in tcsd.service being skipped. Jul 2 09:48:08.096728 systemd[1]: Finished ensure-sysext.service. Jul 2 09:48:08.098126 env[1350]: time="2024-07-02T09:48:08.098104512Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 09:48:08.106759 env[1350]: time="2024-07-02T09:48:08.106741711Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 09:48:08.106833 env[1350]: time="2024-07-02T09:48:08.106822232Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 09:48:08.107528 env[1350]: time="2024-07-02T09:48:08.107510172Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 09:48:08.107569 env[1350]: time="2024-07-02T09:48:08.107528579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 09:48:08.107683 env[1350]: time="2024-07-02T09:48:08.107670918Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 09:48:08.107720 env[1350]: time="2024-07-02T09:48:08.107683494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 09:48:08.107720 env[1350]: time="2024-07-02T09:48:08.107695934Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 09:48:08.107720 env[1350]: time="2024-07-02T09:48:08.107706227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 09:48:08.107807 env[1350]: time="2024-07-02T09:48:08.107764750Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 09:48:08.107927 env[1350]: time="2024-07-02T09:48:08.107916888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 09:48:08.108027 env[1350]: time="2024-07-02T09:48:08.108015935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 09:48:08.108060 env[1350]: time="2024-07-02T09:48:08.108027714Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 09:48:08.108094 env[1350]: time="2024-07-02T09:48:08.108065671Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 09:48:08.108094 env[1350]: time="2024-07-02T09:48:08.108076433Z" level=info msg="metadata content store policy set" policy=shared Jul 2 09:48:08.111083 systemd[1]: Started update-engine.service. Jul 2 09:48:08.119552 env[1350]: time="2024-07-02T09:48:08.119537548Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 09:48:08.119670 env[1350]: time="2024-07-02T09:48:08.119659531Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 09:48:08.119712 env[1350]: time="2024-07-02T09:48:08.119672986Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 09:48:08.119712 env[1350]: time="2024-07-02T09:48:08.119698223Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 09:48:08.119767 env[1350]: time="2024-07-02T09:48:08.119712936Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 09:48:08.119767 env[1350]: time="2024-07-02T09:48:08.119726902Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 09:48:08.119767 env[1350]: time="2024-07-02T09:48:08.119739867Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 09:48:08.119767 env[1350]: time="2024-07-02T09:48:08.119752409Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 09:48:08.119767 env[1350]: time="2024-07-02T09:48:08.119765006Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Jul 2 09:48:08.119898 env[1350]: time="2024-07-02T09:48:08.119776645Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 09:48:08.119898 env[1350]: time="2024-07-02T09:48:08.119789057Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 09:48:08.119898 env[1350]: time="2024-07-02T09:48:08.119800232Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 09:48:08.119898 env[1350]: time="2024-07-02T09:48:08.119865473Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 09:48:08.120002 env[1350]: time="2024-07-02T09:48:08.119928452Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 09:48:08.120506 env[1350]: time="2024-07-02T09:48:08.120402993Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 09:48:08.120506 env[1350]: time="2024-07-02T09:48:08.120492988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 09:48:08.120506 env[1350]: time="2024-07-02T09:48:08.120503872Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 09:48:08.120568 env[1350]: time="2024-07-02T09:48:08.120530948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 09:48:08.120568 env[1350]: time="2024-07-02T09:48:08.120539020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 09:48:08.120568 env[1350]: time="2024-07-02T09:48:08.120546023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 09:48:08.120568 env[1350]: time="2024-07-02T09:48:08.120552169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 09:48:08.120568 env[1350]: time="2024-07-02T09:48:08.120558473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 09:48:08.120568 env[1350]: time="2024-07-02T09:48:08.120564822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 09:48:08.120658 env[1350]: time="2024-07-02T09:48:08.120571327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 09:48:08.120658 env[1350]: time="2024-07-02T09:48:08.120577338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 09:48:08.120658 env[1350]: time="2024-07-02T09:48:08.120585206Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 09:48:08.120658 env[1350]: time="2024-07-02T09:48:08.120652556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 09:48:08.120724 env[1350]: time="2024-07-02T09:48:08.120661924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 09:48:08.120724 env[1350]: time="2024-07-02T09:48:08.120668981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jul 2 09:48:08.120724 env[1350]: time="2024-07-02T09:48:08.120675319Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 09:48:08.120724 env[1350]: time="2024-07-02T09:48:08.120683894Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 09:48:08.120724 env[1350]: time="2024-07-02T09:48:08.120693772Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 09:48:08.120724 env[1350]: time="2024-07-02T09:48:08.120703457Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 09:48:08.120814 env[1350]: time="2024-07-02T09:48:08.120723871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 09:48:08.120886 env[1350]: time="2024-07-02T09:48:08.120827771Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 09:48:08.120886 env[1350]: time="2024-07-02T09:48:08.120858210Z" level=info msg="Connect containerd service" Jul 2 09:48:08.120886 env[1350]: time="2024-07-02T09:48:08.120874708Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 09:48:08.123091 env[1350]: time="2024-07-02T09:48:08.121132154Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network 
for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 09:48:08.123091 env[1350]: time="2024-07-02T09:48:08.121225232Z" level=info msg="Start subscribing containerd event" Jul 2 09:48:08.123091 env[1350]: time="2024-07-02T09:48:08.121248975Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 09:48:08.123091 env[1350]: time="2024-07-02T09:48:08.121259421Z" level=info msg="Start recovering state" Jul 2 09:48:08.123091 env[1350]: time="2024-07-02T09:48:08.121272010Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 09:48:08.123091 env[1350]: time="2024-07-02T09:48:08.121293986Z" level=info msg="containerd successfully booted in 0.023546s" Jul 2 09:48:08.123091 env[1350]: time="2024-07-02T09:48:08.121297906Z" level=info msg="Start event monitor" Jul 2 09:48:08.123091 env[1350]: time="2024-07-02T09:48:08.121315024Z" level=info msg="Start snapshots syncer" Jul 2 09:48:08.123091 env[1350]: time="2024-07-02T09:48:08.121324816Z" level=info msg="Start cni network conf syncer for default" Jul 2 09:48:08.123091 env[1350]: time="2024-07-02T09:48:08.121331515Z" level=info msg="Start streaming server" Jul 2 09:48:08.121135 systemd[1]: Started systemd-logind.service. Jul 2 09:48:08.129324 systemd[1]: Started containerd.service. Jul 2 09:48:08.136807 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 09:48:08.136872 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 09:48:08.137702 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 09:48:08.139376 jq[1378]: false Jul 2 09:48:08.144279 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 09:48:08.145154 systemd[1]: Started locksmithd.service. Jul 2 09:48:08.152962 systemd[1]: Starting modprobe@drm.service... Jul 2 09:48:08.159913 systemd[1]: Starting motdgen.service... Jul 2 09:48:08.167012 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 09:48:08.173278 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 09:48:08.173379 systemd[1]: Reached target system-config.target. Jul 2 09:48:08.181969 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 09:48:08.190926 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 09:48:08.192489 jq[1398]: true Jul 2 09:48:08.199292 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 09:48:08.199387 systemd[1]: Reached target user-config.target. Jul 2 09:48:08.206989 locksmithd[1380]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 09:48:08.207294 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 09:48:08.209391 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 09:48:08.209511 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 09:48:08.209725 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 09:48:08.209802 systemd[1]: Finished modprobe@drm.service. 
Jul 2 09:48:08.218576 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 09:48:08.218685 systemd[1]: Finished motdgen.service. Jul 2 09:48:08.225455 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 09:48:08.225561 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 2 09:48:08.235572 jq[1402]: false Jul 2 09:48:08.235745 systemd[1]: update-ssh-keys-after-ignition.service: Skipped due to 'exec-condition'. Jul 2 09:48:08.235857 systemd[1]: Condition check resulted in update-ssh-keys-after-ignition.service being skipped. Jul 2 09:48:08.320349 systemd-networkd[1101]: bond0: Gained IPv6LL Jul 2 09:48:08.320604 systemd-timesyncd[1288]: Network configuration changed, trying to establish connection. Jul 2 09:48:08.366890 sshd_keygen[1341]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 09:48:08.378997 systemd[1]: Finished sshd-keygen.service. Jul 2 09:48:08.387365 systemd[1]: Starting issuegen.service... Jul 2 09:48:08.394572 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 09:48:08.394682 systemd[1]: Finished issuegen.service. Jul 2 09:48:08.411201 systemd[1]: Starting systemd-user-sessions.service... Jul 2 09:48:08.440005 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Jul 2 09:48:08.424174 systemd[1]: Finished systemd-user-sessions.service. Jul 2 09:48:08.440119 extend-filesystems[1330]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Jul 2 09:48:08.440119 extend-filesystems[1330]: old_desc_blocks = 1, new_desc_blocks = 56 Jul 2 09:48:08.440119 extend-filesystems[1330]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Jul 2 09:48:08.495468 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Jul 2 09:48:08.433088 systemd[1]: Started getty@tty1.service. Jul 2 09:48:08.495735 extend-filesystems[1314]: Resized filesystem in /dev/sdb9 Jul 2 09:48:08.441014 systemd[1]: Started serial-getty@ttyS1.service. Jul 2 09:48:08.468685 systemd[1]: Reached target getty.target. Jul 2 09:48:08.479086 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 09:48:08.479231 systemd[1]: Finished extend-filesystems.service. Jul 2 09:48:08.543240 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Jul 2 09:48:08.543341 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:1 Jul 2 09:48:08.640469 systemd-timesyncd[1288]: Network configuration changed, trying to establish connection. Jul 2 09:48:08.640606 systemd-timesyncd[1288]: Network configuration changed, trying to establish connection. Jul 2 09:48:08.641371 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 09:48:08.651558 systemd[1]: Reached target network-online.target. Jul 2 09:48:08.661490 systemd[1]: Starting kubelet.service... Jul 2 09:48:09.411401 systemd[1]: Started kubelet.service. Jul 2 09:48:09.535315 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Jul 2 09:48:10.097135 kubelet[1440]: E0702 09:48:10.097063 1440 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 09:48:10.098455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 09:48:10.098556 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
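The kubelet's exit with status 1 here is likewise expected on a fresh node: it is configured to load /var/lib/kubelet/config.yaml (the path named in the error), and that file has not been written yet. As a rough, hypothetical sketch of the file it is asking for, not the configuration this node is eventually given, a minimal KubeletConfiguration looks like:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    staticPodPath: /etc/kubernetes/manifests      # path the kubelet later polls for static pods
    cgroupDriver: cgroupfs                        # matches "CgroupDriver":"cgroupfs" in the container-manager dump below
    clusterDomain: cluster.local                  # assumed default
    clusterDNS:
      - 10.96.0.10                                # assumed service DNS address, not taken from this log
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt  # CA bundle the kubelet watches later in this log

Once the node is bootstrapped and a file like this exists, the kubelet restart at 09:48:17 further down comes up cleanly.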
Jul 2 09:48:13.473937 login[1426]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Jul 2 09:48:13.474402 login[1423]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 09:48:13.481210 systemd[1]: Created slice user-500.slice. Jul 2 09:48:13.481746 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 09:48:13.482790 systemd-logind[1342]: New session 2 of user core. Jul 2 09:48:13.487556 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 09:48:13.488357 systemd[1]: Starting user@500.service... Jul 2 09:48:13.490502 (systemd)[1460]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:48:13.558491 systemd[1460]: Queued start job for default target default.target. Jul 2 09:48:13.558589 systemd[1460]: Reached target paths.target. Jul 2 09:48:13.558600 systemd[1460]: Reached target sockets.target. Jul 2 09:48:13.558607 systemd[1460]: Reached target timers.target. Jul 2 09:48:13.558613 systemd[1460]: Reached target basic.target. Jul 2 09:48:13.558632 systemd[1460]: Reached target default.target. Jul 2 09:48:13.558645 systemd[1460]: Startup finished in 65ms. Jul 2 09:48:13.558707 systemd[1]: Started user@500.service. Jul 2 09:48:13.559231 systemd[1]: Started session-2.scope. Jul 2 09:48:14.081269 coreos-metadata[1306]: Jul 02 09:48:14.081 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Jul 2 09:48:14.082131 coreos-metadata[1305]: Jul 02 09:48:14.081 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Jul 2 09:48:14.478941 login[1426]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 09:48:14.481708 systemd-logind[1342]: New session 1 of user core. Jul 2 09:48:14.482243 systemd[1]: Started session-1.scope. Jul 2 09:48:14.630172 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Jul 2 09:48:14.630334 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Jul 2 09:48:15.081473 coreos-metadata[1306]: Jul 02 09:48:15.081 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 2 09:48:15.082370 coreos-metadata[1305]: Jul 02 09:48:15.081 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 2 09:48:15.113460 coreos-metadata[1305]: Jul 02 09:48:15.113 INFO Fetch successful Jul 2 09:48:15.118910 coreos-metadata[1306]: Jul 02 09:48:15.118 INFO Fetch successful Jul 2 09:48:15.139529 unknown[1305]: wrote ssh authorized keys file for user: core Jul 2 09:48:15.143927 systemd[1]: Finished coreos-metadata.service. Jul 2 09:48:15.144912 systemd[1]: Started packet-phone-home.service. Jul 2 09:48:15.155635 curl[1489]: % Total % Received % Xferd Average Speed Time Time Time Current Jul 2 09:48:15.156384 curl[1489]: Dload Upload Total Spent Left Speed Jul 2 09:48:15.162269 update-ssh-keys[1485]: Updated "/home/core/.ssh/authorized_keys" Jul 2 09:48:15.162566 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Jul 2 09:48:15.162757 systemd[1]: Reached target multi-user.target. Jul 2 09:48:15.163554 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 09:48:15.167567 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 09:48:15.167704 systemd[1]: Finished systemd-update-utmp-runlevel.service. 
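The core user's SSH keys above come from the Equinix Metal (packet.net) metadata service: the first fetch fails only because name resolution is not ready yet, the retry succeeds, and coreos-metadata-sshkeys@core.service then writes /home/core/.ssh/authorized_keys. On platforms without such a metadata service the same keys can be injected through provisioning data instead; as a hypothetical illustration only (the key material is a placeholder, and this node did not use this path), a Flatcar cloud-config fragment would be:

    #cloud-config
    ssh_authorized_keys:
      - "ssh-ed25519 AAAAC3Nza...EXAMPLE core@example"   # placeholder key, not from this log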
Jul 2 09:48:15.167889 systemd[1]: Startup finished in 8.768s (kernel) + 14.426s (userspace) = 23.195s. Jul 2 09:48:15.339929 curl[1489]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Jul 2 09:48:15.342309 systemd[1]: packet-phone-home.service: Deactivated successfully. Jul 2 09:48:15.455303 systemd[1]: Created slice system-sshd.slice. Jul 2 09:48:15.458531 systemd[1]: Started sshd@0-147.75.203.53:22-139.178.68.195:53006.service. Jul 2 09:48:15.514233 sshd[1495]: Accepted publickey for core from 139.178.68.195 port 53006 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 09:48:15.515162 sshd[1495]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:48:15.518491 systemd-logind[1342]: New session 3 of user core. Jul 2 09:48:15.519324 systemd[1]: Started session-3.scope. Jul 2 09:48:15.574976 systemd[1]: Started sshd@1-147.75.203.53:22-139.178.68.195:53010.service. Jul 2 09:48:15.610544 sshd[1500]: Accepted publickey for core from 139.178.68.195 port 53010 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 09:48:15.611198 sshd[1500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:48:15.613562 systemd-logind[1342]: New session 4 of user core. Jul 2 09:48:15.614066 systemd[1]: Started session-4.scope. Jul 2 09:48:15.665952 sshd[1500]: pam_unix(sshd:session): session closed for user core Jul 2 09:48:15.668888 systemd[1]: Started sshd@2-147.75.203.53:22-139.178.68.195:53026.service. Jul 2 09:48:15.669629 systemd[1]: sshd@1-147.75.203.53:22-139.178.68.195:53010.service: Deactivated successfully. Jul 2 09:48:15.670717 systemd-logind[1342]: Session 4 logged out. Waiting for processes to exit. Jul 2 09:48:15.670734 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 09:48:15.671907 systemd-logind[1342]: Removed session 4. Jul 2 09:48:15.717426 sshd[1506]: Accepted publickey for core from 139.178.68.195 port 53026 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 09:48:15.719948 sshd[1506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:48:15.729506 systemd-logind[1342]: New session 5 of user core. Jul 2 09:48:15.732575 systemd[1]: Started session-5.scope. Jul 2 09:48:15.803429 sshd[1506]: pam_unix(sshd:session): session closed for user core Jul 2 09:48:15.809474 systemd[1]: Started sshd@3-147.75.203.53:22-139.178.68.195:53028.service. Jul 2 09:48:15.811111 systemd[1]: sshd@2-147.75.203.53:22-139.178.68.195:53026.service: Deactivated successfully. Jul 2 09:48:15.813644 systemd-logind[1342]: Session 5 logged out. Waiting for processes to exit. Jul 2 09:48:15.813685 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 09:48:15.814997 systemd-logind[1342]: Removed session 5. Jul 2 09:48:15.845640 sshd[1512]: Accepted publickey for core from 139.178.68.195 port 53028 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 09:48:15.846367 sshd[1512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:48:15.848952 systemd-logind[1342]: New session 6 of user core. Jul 2 09:48:15.849455 systemd[1]: Started session-6.scope. Jul 2 09:48:15.915393 sshd[1512]: pam_unix(sshd:session): session closed for user core Jul 2 09:48:15.921618 systemd[1]: Started sshd@4-147.75.203.53:22-139.178.68.195:53034.service. Jul 2 09:48:15.923248 systemd[1]: sshd@3-147.75.203.53:22-139.178.68.195:53028.service: Deactivated successfully. 
Jul 2 09:48:15.925745 systemd-logind[1342]: Session 6 logged out. Waiting for processes to exit. Jul 2 09:48:15.925800 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 09:48:15.928267 systemd-logind[1342]: Removed session 6. Jul 2 09:48:15.985663 sshd[1520]: Accepted publickey for core from 139.178.68.195 port 53034 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 09:48:15.987956 sshd[1520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:48:15.995741 systemd-logind[1342]: New session 7 of user core. Jul 2 09:48:15.997339 systemd[1]: Started session-7.scope. Jul 2 09:48:16.092019 sudo[1525]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 09:48:16.092727 sudo[1525]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 09:48:17.044860 systemd[1]: Stopped kubelet.service. Jul 2 09:48:17.046195 systemd[1]: Starting kubelet.service... Jul 2 09:48:17.058821 systemd[1]: Reloading. Jul 2 09:48:17.088995 /usr/lib/systemd/system-generators/torcx-generator[1613]: time="2024-07-02T09:48:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 09:48:17.089028 /usr/lib/systemd/system-generators/torcx-generator[1613]: time="2024-07-02T09:48:17Z" level=info msg="torcx already run" Jul 2 09:48:17.145360 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 09:48:17.145368 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 09:48:17.157529 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 09:48:17.223056 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 09:48:17.223116 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 09:48:17.223311 systemd[1]: Stopped kubelet.service. Jul 2 09:48:17.224387 systemd[1]: Starting kubelet.service... Jul 2 09:48:17.431899 systemd[1]: Started kubelet.service. Jul 2 09:48:17.461362 kubelet[1685]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 09:48:17.461362 kubelet[1685]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 09:48:17.461362 kubelet[1685]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
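Of the three kubelet deprecation warnings above, --container-runtime-endpoint and --volume-plugin-dir have direct counterparts in the KubeletConfiguration file, while --pod-infra-container-image does not (as the message says, the image garbage collector now learns the sandbox image from the CRI). Using the containerd socket and FlexVolume directory that appear elsewhere in this log, the config-file equivalents would be roughly:

    # sketch of the corresponding /var/lib/kubelet/config.yaml entries;
    # values copied from paths visible in this log, not from the node's actual file
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/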
Jul 2 09:48:17.461589 kubelet[1685]: I0702 09:48:17.461357 1685 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 09:48:17.608531 kubelet[1685]: I0702 09:48:17.608489 1685 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 09:48:17.608531 kubelet[1685]: I0702 09:48:17.608502 1685 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 09:48:17.608644 kubelet[1685]: I0702 09:48:17.608603 1685 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 09:48:17.618979 kubelet[1685]: I0702 09:48:17.618942 1685 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 09:48:17.645155 kubelet[1685]: I0702 09:48:17.645139 1685 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 09:48:17.646995 kubelet[1685]: I0702 09:48:17.646960 1685 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 09:48:17.647084 kubelet[1685]: I0702 09:48:17.647053 1685 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 09:48:17.647462 kubelet[1685]: I0702 09:48:17.647427 1685 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 09:48:17.647462 kubelet[1685]: I0702 09:48:17.647435 1685 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 09:48:17.649027 kubelet[1685]: I0702 09:48:17.648993 1685 state_mem.go:36] "Initialized new in-memory state store" Jul 2 09:48:17.650935 kubelet[1685]: I0702 09:48:17.650891 1685 kubelet.go:393] "Attempting to sync node with API server" Jul 2 09:48:17.650935 kubelet[1685]: I0702 09:48:17.650901 1685 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 09:48:17.650987 kubelet[1685]: I0702 09:48:17.650940 1685 kubelet.go:309] "Adding apiserver pod source" Jul 2 09:48:17.650987 kubelet[1685]: I0702 09:48:17.650946 1685 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 09:48:17.651071 kubelet[1685]: E0702 
09:48:17.651042 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:17.651071 kubelet[1685]: E0702 09:48:17.651071 1685 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:17.652029 kubelet[1685]: I0702 09:48:17.651978 1685 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 09:48:17.653088 kubelet[1685]: W0702 09:48:17.653082 1685 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 09:48:17.653462 kubelet[1685]: I0702 09:48:17.653453 1685 server.go:1232] "Started kubelet" Jul 2 09:48:17.653569 kubelet[1685]: I0702 09:48:17.653560 1685 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 09:48:17.653608 kubelet[1685]: I0702 09:48:17.653597 1685 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 09:48:17.653784 kubelet[1685]: I0702 09:48:17.653774 1685 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 09:48:17.654175 kubelet[1685]: I0702 09:48:17.654169 1685 server.go:462] "Adding debug handlers to kubelet server" Jul 2 09:48:17.663452 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 2 09:48:17.663596 kubelet[1685]: I0702 09:48:17.663564 1685 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 09:48:17.663650 kubelet[1685]: I0702 09:48:17.663642 1685 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 09:48:17.663693 kubelet[1685]: I0702 09:48:17.663687 1685 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 09:48:17.663767 kubelet[1685]: E0702 09:48:17.663754 1685 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 09:48:17.663767 kubelet[1685]: I0702 09:48:17.663759 1685 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 09:48:17.663820 kubelet[1685]: E0702 09:48:17.663773 1685 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 09:48:17.667208 kubelet[1685]: E0702 09:48:17.667199 1685 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.67.80.19\" not found" node="10.67.80.19" Jul 2 09:48:17.691416 kubelet[1685]: I0702 09:48:17.691372 1685 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 09:48:17.691416 kubelet[1685]: I0702 09:48:17.691386 1685 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 09:48:17.691416 kubelet[1685]: I0702 09:48:17.691395 1685 state_mem.go:36] "Initialized new in-memory state store" Jul 2 09:48:17.692178 kubelet[1685]: I0702 09:48:17.692146 1685 policy_none.go:49] "None policy: Start" Jul 2 09:48:17.692414 kubelet[1685]: I0702 09:48:17.692374 1685 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 09:48:17.692414 kubelet[1685]: I0702 09:48:17.692387 1685 state_mem.go:35] "Initializing new in-memory state store" Jul 2 09:48:17.694751 kubelet[1685]: I0702 09:48:17.694743 1685 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 09:48:17.694847 kubelet[1685]: I0702 09:48:17.694841 1685 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 09:48:17.695024 kubelet[1685]: E0702 09:48:17.695018 1685 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.80.19\" not found" Jul 2 09:48:17.764669 kubelet[1685]: I0702 09:48:17.764650 1685 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.19" Jul 2 09:48:17.770144 kubelet[1685]: I0702 09:48:17.770100 1685 kubelet_node_status.go:73] "Successfully registered node" node="10.67.80.19" Jul 2 09:48:17.781538 kubelet[1685]: I0702 09:48:17.781521 1685 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 2 09:48:17.781810 env[1350]: time="2024-07-02T09:48:17.781750937Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 09:48:17.782067 kubelet[1685]: I0702 09:48:17.781880 1685 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 2 09:48:17.786809 kubelet[1685]: I0702 09:48:17.786776 1685 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 09:48:17.787515 kubelet[1685]: I0702 09:48:17.787503 1685 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 09:48:17.787555 kubelet[1685]: I0702 09:48:17.787520 1685 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 09:48:17.787555 kubelet[1685]: I0702 09:48:17.787533 1685 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 09:48:17.787604 kubelet[1685]: E0702 09:48:17.787570 1685 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 2 09:48:18.610385 kubelet[1685]: I0702 09:48:18.610251 1685 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 2 09:48:18.611332 kubelet[1685]: W0702 09:48:18.610610 1685 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received Jul 2 09:48:18.611332 kubelet[1685]: W0702 09:48:18.610660 1685 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received Jul 2 09:48:18.611332 kubelet[1685]: W0702 09:48:18.610661 1685 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received Jul 2 09:48:18.652114 kubelet[1685]: I0702 09:48:18.652006 1685 apiserver.go:52] "Watching apiserver" Jul 2 09:48:18.652114 kubelet[1685]: E0702 09:48:18.652084 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:18.659923 kubelet[1685]: I0702 09:48:18.659894 1685 topology_manager.go:215] "Topology Admit Handler" podUID="b8ab07ee-1499-43c3-a5fa-0c0dffd09839" podNamespace="kube-system" podName="kube-proxy-q4n8m" Jul 2 09:48:18.659983 kubelet[1685]: I0702 09:48:18.659967 1685 topology_manager.go:215] "Topology Admit Handler" podUID="50759504-e25c-4320-b5cd-8db8960523ae" podNamespace="kube-system" podName="cilium-5mqxs" Jul 2 09:48:18.664146 kubelet[1685]: I0702 09:48:18.664137 1685 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 09:48:18.669386 kubelet[1685]: I0702 09:48:18.669342 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-etc-cni-netd\") pod \"cilium-5mqxs\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " pod="kube-system/cilium-5mqxs" Jul 2 09:48:18.669386 kubelet[1685]: I0702 09:48:18.669361 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-lib-modules\") pod \"cilium-5mqxs\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " pod="kube-system/cilium-5mqxs" Jul 2 09:48:18.669386 kubelet[1685]: I0702 09:48:18.669375 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-host-proc-sys-kernel\") pod \"cilium-5mqxs\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " pod="kube-system/cilium-5mqxs" Jul 2 
09:48:18.669386 kubelet[1685]: I0702 09:48:18.669388 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-hostproc\") pod \"cilium-5mqxs\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " pod="kube-system/cilium-5mqxs" Jul 2 09:48:18.669478 kubelet[1685]: I0702 09:48:18.669400 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50759504-e25c-4320-b5cd-8db8960523ae-cilium-config-path\") pod \"cilium-5mqxs\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " pod="kube-system/cilium-5mqxs" Jul 2 09:48:18.669478 kubelet[1685]: I0702 09:48:18.669432 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-bpf-maps\") pod \"cilium-5mqxs\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " pod="kube-system/cilium-5mqxs" Jul 2 09:48:18.669478 kubelet[1685]: I0702 09:48:18.669464 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-cni-path\") pod \"cilium-5mqxs\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " pod="kube-system/cilium-5mqxs" Jul 2 09:48:18.669538 kubelet[1685]: I0702 09:48:18.669483 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50759504-e25c-4320-b5cd-8db8960523ae-clustermesh-secrets\") pod \"cilium-5mqxs\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " pod="kube-system/cilium-5mqxs" Jul 2 09:48:18.669538 kubelet[1685]: I0702 09:48:18.669502 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-host-proc-sys-net\") pod \"cilium-5mqxs\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " pod="kube-system/cilium-5mqxs" Jul 2 09:48:18.669538 kubelet[1685]: I0702 09:48:18.669517 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50759504-e25c-4320-b5cd-8db8960523ae-hubble-tls\") pod \"cilium-5mqxs\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " pod="kube-system/cilium-5mqxs" Jul 2 09:48:18.669538 kubelet[1685]: I0702 09:48:18.669531 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b8ab07ee-1499-43c3-a5fa-0c0dffd09839-kube-proxy\") pod \"kube-proxy-q4n8m\" (UID: \"b8ab07ee-1499-43c3-a5fa-0c0dffd09839\") " pod="kube-system/kube-proxy-q4n8m" Jul 2 09:48:18.669606 kubelet[1685]: I0702 09:48:18.669543 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8ab07ee-1499-43c3-a5fa-0c0dffd09839-xtables-lock\") pod \"kube-proxy-q4n8m\" (UID: \"b8ab07ee-1499-43c3-a5fa-0c0dffd09839\") " pod="kube-system/kube-proxy-q4n8m" Jul 2 09:48:18.669606 kubelet[1685]: I0702 09:48:18.669555 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlklc\" (UniqueName: 
\"kubernetes.io/projected/b8ab07ee-1499-43c3-a5fa-0c0dffd09839-kube-api-access-xlklc\") pod \"kube-proxy-q4n8m\" (UID: \"b8ab07ee-1499-43c3-a5fa-0c0dffd09839\") " pod="kube-system/kube-proxy-q4n8m" Jul 2 09:48:18.669606 kubelet[1685]: I0702 09:48:18.669567 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6ftn\" (UniqueName: \"kubernetes.io/projected/50759504-e25c-4320-b5cd-8db8960523ae-kube-api-access-s6ftn\") pod \"cilium-5mqxs\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " pod="kube-system/cilium-5mqxs" Jul 2 09:48:18.669606 kubelet[1685]: I0702 09:48:18.669579 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-xtables-lock\") pod \"cilium-5mqxs\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " pod="kube-system/cilium-5mqxs" Jul 2 09:48:18.669606 kubelet[1685]: I0702 09:48:18.669602 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8ab07ee-1499-43c3-a5fa-0c0dffd09839-lib-modules\") pod \"kube-proxy-q4n8m\" (UID: \"b8ab07ee-1499-43c3-a5fa-0c0dffd09839\") " pod="kube-system/kube-proxy-q4n8m" Jul 2 09:48:18.669692 kubelet[1685]: I0702 09:48:18.669620 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-cilium-run\") pod \"cilium-5mqxs\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " pod="kube-system/cilium-5mqxs" Jul 2 09:48:18.669692 kubelet[1685]: I0702 09:48:18.669632 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-cilium-cgroup\") pod \"cilium-5mqxs\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " pod="kube-system/cilium-5mqxs" Jul 2 09:48:18.963350 env[1350]: time="2024-07-02T09:48:18.963123986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5mqxs,Uid:50759504-e25c-4320-b5cd-8db8960523ae,Namespace:kube-system,Attempt:0,}" Jul 2 09:48:18.964301 env[1350]: time="2024-07-02T09:48:18.963421605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q4n8m,Uid:b8ab07ee-1499-43c3-a5fa-0c0dffd09839,Namespace:kube-system,Attempt:0,}" Jul 2 09:48:18.972562 sudo[1525]: pam_unix(sudo:session): session closed for user root Jul 2 09:48:18.977591 sshd[1520]: pam_unix(sshd:session): session closed for user core Jul 2 09:48:18.983466 systemd[1]: sshd@4-147.75.203.53:22-139.178.68.195:53034.service: Deactivated successfully. Jul 2 09:48:18.986408 systemd-logind[1342]: Session 7 logged out. Waiting for processes to exit. Jul 2 09:48:18.986565 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 09:48:18.989019 systemd-logind[1342]: Removed session 7. Jul 2 09:48:19.652828 kubelet[1685]: E0702 09:48:19.652721 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:19.659071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3893210809.mount: Deactivated successfully. 
Jul 2 09:48:19.679508 env[1350]: time="2024-07-02T09:48:19.679412322Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:19.682001 env[1350]: time="2024-07-02T09:48:19.681885004Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:19.686509 env[1350]: time="2024-07-02T09:48:19.686406431Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:19.693225 env[1350]: time="2024-07-02T09:48:19.693128935Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:19.700450 env[1350]: time="2024-07-02T09:48:19.700316375Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:19.706645 env[1350]: time="2024-07-02T09:48:19.706528942Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:19.709118 env[1350]: time="2024-07-02T09:48:19.709018289Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:19.711722 env[1350]: time="2024-07-02T09:48:19.711624555Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:19.723277 env[1350]: time="2024-07-02T09:48:19.723153614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:48:19.723277 env[1350]: time="2024-07-02T09:48:19.723232397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:48:19.723523 env[1350]: time="2024-07-02T09:48:19.723308189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:48:19.723702 env[1350]: time="2024-07-02T09:48:19.723580449Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec841f5796023cdf37e686c0b17f892bf3b67534d5f583bd3794d32bc02e71f2 pid=1754 runtime=io.containerd.runc.v2 Jul 2 09:48:19.724971 env[1350]: time="2024-07-02T09:48:19.724839457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:48:19.724971 env[1350]: time="2024-07-02T09:48:19.724908033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:48:19.724971 env[1350]: time="2024-07-02T09:48:19.724933171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:48:19.725308 env[1350]: time="2024-07-02T09:48:19.725206173Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d pid=1762 runtime=io.containerd.runc.v2 Jul 2 09:48:19.767704 env[1350]: time="2024-07-02T09:48:19.767667351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5mqxs,Uid:50759504-e25c-4320-b5cd-8db8960523ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\"" Jul 2 09:48:19.767704 env[1350]: time="2024-07-02T09:48:19.767665932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q4n8m,Uid:b8ab07ee-1499-43c3-a5fa-0c0dffd09839,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec841f5796023cdf37e686c0b17f892bf3b67534d5f583bd3794d32bc02e71f2\"" Jul 2 09:48:19.770124 env[1350]: time="2024-07-02T09:48:19.770098652Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 09:48:20.653021 kubelet[1685]: E0702 09:48:20.652976 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:20.786649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1188167660.mount: Deactivated successfully. Jul 2 09:48:21.126988 env[1350]: time="2024-07-02T09:48:21.126942897Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:21.127545 env[1350]: time="2024-07-02T09:48:21.127505553Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:21.128252 env[1350]: time="2024-07-02T09:48:21.128211791Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:21.128951 env[1350]: time="2024-07-02T09:48:21.128914523Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:21.129267 env[1350]: time="2024-07-02T09:48:21.129231315Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jul 2 09:48:21.129951 env[1350]: time="2024-07-02T09:48:21.129877977Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 09:48:21.130749 env[1350]: time="2024-07-02T09:48:21.130736211Z" level=info msg="CreateContainer within sandbox \"ec841f5796023cdf37e686c0b17f892bf3b67534d5f583bd3794d32bc02e71f2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 09:48:21.137558 env[1350]: time="2024-07-02T09:48:21.137523219Z" level=info msg="CreateContainer within sandbox \"ec841f5796023cdf37e686c0b17f892bf3b67534d5f583bd3794d32bc02e71f2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"84601c8aa3a77af99bfd46a615f43a2233871f226ae2ce8bf5b9d00a1dcda7be\"" Jul 2 09:48:21.137924 env[1350]: 
time="2024-07-02T09:48:21.137863413Z" level=info msg="StartContainer for \"84601c8aa3a77af99bfd46a615f43a2233871f226ae2ce8bf5b9d00a1dcda7be\"" Jul 2 09:48:21.161745 env[1350]: time="2024-07-02T09:48:21.161690876Z" level=info msg="StartContainer for \"84601c8aa3a77af99bfd46a615f43a2233871f226ae2ce8bf5b9d00a1dcda7be\" returns successfully" Jul 2 09:48:21.653838 kubelet[1685]: E0702 09:48:21.653720 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:21.805799 kubelet[1685]: I0702 09:48:21.805745 1685 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-q4n8m" podStartSLOduration=2.444916044 podCreationTimestamp="2024-07-02 09:48:18 +0000 UTC" firstStartedPulling="2024-07-02 09:48:19.76886095 +0000 UTC m=+2.331224681" lastFinishedPulling="2024-07-02 09:48:21.129668145 +0000 UTC m=+3.692031872" observedRunningTime="2024-07-02 09:48:21.805712059 +0000 UTC m=+4.368075786" watchObservedRunningTime="2024-07-02 09:48:21.805723235 +0000 UTC m=+4.368086961" Jul 2 09:48:22.654515 kubelet[1685]: E0702 09:48:22.654453 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:23.655314 kubelet[1685]: E0702 09:48:23.655299 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:24.655776 kubelet[1685]: E0702 09:48:24.655730 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:24.737044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3530373742.mount: Deactivated successfully. Jul 2 09:48:25.656414 kubelet[1685]: E0702 09:48:25.656399 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:26.446969 env[1350]: time="2024-07-02T09:48:26.446915529Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:26.447538 env[1350]: time="2024-07-02T09:48:26.447482390Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:26.448407 env[1350]: time="2024-07-02T09:48:26.448365432Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:26.448776 env[1350]: time="2024-07-02T09:48:26.448735506Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 09:48:26.449820 env[1350]: time="2024-07-02T09:48:26.449786413Z" level=info msg="CreateContainer within sandbox \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 09:48:26.454003 env[1350]: time="2024-07-02T09:48:26.453986548Z" level=info msg="CreateContainer within sandbox \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7d2bd412686cbb867e68a7d3490dcb3794d906bc7b399e26c99c66a57adcf7c4\"" Jul 2 09:48:26.454189 env[1350]: time="2024-07-02T09:48:26.454178617Z" level=info msg="StartContainer for \"7d2bd412686cbb867e68a7d3490dcb3794d906bc7b399e26c99c66a57adcf7c4\"" Jul 2 09:48:26.474698 env[1350]: time="2024-07-02T09:48:26.474674520Z" level=info msg="StartContainer for \"7d2bd412686cbb867e68a7d3490dcb3794d906bc7b399e26c99c66a57adcf7c4\" returns successfully" Jul 2 09:48:26.657434 kubelet[1685]: E0702 09:48:26.657340 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:27.457333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d2bd412686cbb867e68a7d3490dcb3794d906bc7b399e26c99c66a57adcf7c4-rootfs.mount: Deactivated successfully. Jul 2 09:48:27.657632 kubelet[1685]: E0702 09:48:27.657504 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:27.702876 env[1350]: time="2024-07-02T09:48:27.702801090Z" level=error msg="collecting metrics for 7d2bd412686cbb867e68a7d3490dcb3794d906bc7b399e26c99c66a57adcf7c4" error="cgroups: cgroup deleted: unknown" Jul 2 09:48:27.827517 env[1350]: time="2024-07-02T09:48:27.827259175Z" level=info msg="shim disconnected" id=7d2bd412686cbb867e68a7d3490dcb3794d906bc7b399e26c99c66a57adcf7c4 Jul 2 09:48:27.827517 env[1350]: time="2024-07-02T09:48:27.827372082Z" level=warning msg="cleaning up after shim disconnected" id=7d2bd412686cbb867e68a7d3490dcb3794d906bc7b399e26c99c66a57adcf7c4 namespace=k8s.io Jul 2 09:48:27.827517 env[1350]: time="2024-07-02T09:48:27.827400568Z" level=info msg="cleaning up dead shim" Jul 2 09:48:27.840014 env[1350]: time="2024-07-02T09:48:27.839972971Z" level=warning msg="cleanup warnings time=\"2024-07-02T09:48:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2046 runtime=io.containerd.runc.v2\n" Jul 2 09:48:28.658563 kubelet[1685]: E0702 09:48:28.658447 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:28.815193 env[1350]: time="2024-07-02T09:48:28.815055948Z" level=info msg="CreateContainer within sandbox \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 09:48:28.828888 env[1350]: time="2024-07-02T09:48:28.828804872Z" level=info msg="CreateContainer within sandbox \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8d93bac9ee0e84baa0e7c79eecb8572f4c7f6f457e08827923556145606ed1d4\"" Jul 2 09:48:28.829176 env[1350]: time="2024-07-02T09:48:28.829141007Z" level=info msg="StartContainer for \"8d93bac9ee0e84baa0e7c79eecb8572f4c7f6f457e08827923556145606ed1d4\"" Jul 2 09:48:28.851428 env[1350]: time="2024-07-02T09:48:28.851403263Z" level=info msg="StartContainer for \"8d93bac9ee0e84baa0e7c79eecb8572f4c7f6f457e08827923556145606ed1d4\" returns successfully" Jul 2 09:48:28.857610 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 09:48:28.857768 systemd[1]: Stopped systemd-sysctl.service. Jul 2 09:48:28.857911 systemd[1]: Stopping systemd-sysctl.service... Jul 2 09:48:28.858935 systemd[1]: Starting systemd-sysctl.service... Jul 2 09:48:28.862964 systemd[1]: Finished systemd-sysctl.service. 
Jul 2 09:48:28.867631 env[1350]: time="2024-07-02T09:48:28.867608198Z" level=info msg="shim disconnected" id=8d93bac9ee0e84baa0e7c79eecb8572f4c7f6f457e08827923556145606ed1d4 Jul 2 09:48:28.867701 env[1350]: time="2024-07-02T09:48:28.867633592Z" level=warning msg="cleaning up after shim disconnected" id=8d93bac9ee0e84baa0e7c79eecb8572f4c7f6f457e08827923556145606ed1d4 namespace=k8s.io Jul 2 09:48:28.867701 env[1350]: time="2024-07-02T09:48:28.867639180Z" level=info msg="cleaning up dead shim" Jul 2 09:48:28.871057 env[1350]: time="2024-07-02T09:48:28.871040182Z" level=warning msg="cleanup warnings time=\"2024-07-02T09:48:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2110 runtime=io.containerd.runc.v2\n" Jul 2 09:48:29.658783 kubelet[1685]: E0702 09:48:29.658674 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:29.813329 env[1350]: time="2024-07-02T09:48:29.813301775Z" level=info msg="CreateContainer within sandbox \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 09:48:29.819097 env[1350]: time="2024-07-02T09:48:29.819078115Z" level=info msg="CreateContainer within sandbox \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"54b6bd548a8c93a4790cb3b0b98ddfa5cbe395c57447723b64e6fbb1dfb4455a\"" Jul 2 09:48:29.819322 env[1350]: time="2024-07-02T09:48:29.819304082Z" level=info msg="StartContainer for \"54b6bd548a8c93a4790cb3b0b98ddfa5cbe395c57447723b64e6fbb1dfb4455a\"" Jul 2 09:48:29.826148 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d93bac9ee0e84baa0e7c79eecb8572f4c7f6f457e08827923556145606ed1d4-rootfs.mount: Deactivated successfully. Jul 2 09:48:29.850080 env[1350]: time="2024-07-02T09:48:29.850052918Z" level=info msg="StartContainer for \"54b6bd548a8c93a4790cb3b0b98ddfa5cbe395c57447723b64e6fbb1dfb4455a\" returns successfully" Jul 2 09:48:29.860078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54b6bd548a8c93a4790cb3b0b98ddfa5cbe395c57447723b64e6fbb1dfb4455a-rootfs.mount: Deactivated successfully. 
Jul 2 09:48:29.861140 env[1350]: time="2024-07-02T09:48:29.861091686Z" level=info msg="shim disconnected" id=54b6bd548a8c93a4790cb3b0b98ddfa5cbe395c57447723b64e6fbb1dfb4455a Jul 2 09:48:29.861140 env[1350]: time="2024-07-02T09:48:29.861134084Z" level=warning msg="cleaning up after shim disconnected" id=54b6bd548a8c93a4790cb3b0b98ddfa5cbe395c57447723b64e6fbb1dfb4455a namespace=k8s.io Jul 2 09:48:29.861140 env[1350]: time="2024-07-02T09:48:29.861140114Z" level=info msg="cleaning up dead shim" Jul 2 09:48:29.864768 env[1350]: time="2024-07-02T09:48:29.864720374Z" level=warning msg="cleanup warnings time=\"2024-07-02T09:48:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2167 runtime=io.containerd.runc.v2\n" Jul 2 09:48:30.659122 kubelet[1685]: E0702 09:48:30.659057 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:30.824468 env[1350]: time="2024-07-02T09:48:30.824372183Z" level=info msg="CreateContainer within sandbox \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 09:48:30.838949 env[1350]: time="2024-07-02T09:48:30.838880900Z" level=info msg="CreateContainer within sandbox \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3adac6758886701aa33d3906713c7bd981907717320099bf9a1fc18d97635efe\"" Jul 2 09:48:30.839125 env[1350]: time="2024-07-02T09:48:30.839109899Z" level=info msg="StartContainer for \"3adac6758886701aa33d3906713c7bd981907717320099bf9a1fc18d97635efe\"" Jul 2 09:48:30.858291 env[1350]: time="2024-07-02T09:48:30.858264405Z" level=info msg="StartContainer for \"3adac6758886701aa33d3906713c7bd981907717320099bf9a1fc18d97635efe\" returns successfully" Jul 2 09:48:30.883844 env[1350]: time="2024-07-02T09:48:30.883809710Z" level=info msg="shim disconnected" id=3adac6758886701aa33d3906713c7bd981907717320099bf9a1fc18d97635efe Jul 2 09:48:30.883844 env[1350]: time="2024-07-02T09:48:30.883843569Z" level=warning msg="cleaning up after shim disconnected" id=3adac6758886701aa33d3906713c7bd981907717320099bf9a1fc18d97635efe namespace=k8s.io Jul 2 09:48:30.883978 env[1350]: time="2024-07-02T09:48:30.883851600Z" level=info msg="cleaning up dead shim" Jul 2 09:48:30.888362 env[1350]: time="2024-07-02T09:48:30.888336940Z" level=warning msg="cleanup warnings time=\"2024-07-02T09:48:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2219 runtime=io.containerd.runc.v2\n" Jul 2 09:48:31.659616 kubelet[1685]: E0702 09:48:31.659544 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:31.839008 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3adac6758886701aa33d3906713c7bd981907717320099bf9a1fc18d97635efe-rootfs.mount: Deactivated successfully. 
Jul 2 09:48:31.839362 env[1350]: time="2024-07-02T09:48:31.839343967Z" level=info msg="CreateContainer within sandbox \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 09:48:31.845005 env[1350]: time="2024-07-02T09:48:31.844991474Z" level=info msg="CreateContainer within sandbox \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd\"" Jul 2 09:48:31.845280 env[1350]: time="2024-07-02T09:48:31.845267810Z" level=info msg="StartContainer for \"b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd\"" Jul 2 09:48:31.866863 env[1350]: time="2024-07-02T09:48:31.866811229Z" level=info msg="StartContainer for \"b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd\" returns successfully" Jul 2 09:48:31.921318 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Jul 2 09:48:31.989288 kubelet[1685]: I0702 09:48:31.989269 1685 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 09:48:32.164258 kernel: Initializing XFRM netlink socket Jul 2 09:48:32.177296 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Jul 2 09:48:32.659926 kubelet[1685]: E0702 09:48:32.659801 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:32.867295 kubelet[1685]: I0702 09:48:32.867272 1685 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5mqxs" podStartSLOduration=8.187196475 podCreationTimestamp="2024-07-02 09:48:18 +0000 UTC" firstStartedPulling="2024-07-02 09:48:19.768862898 +0000 UTC m=+2.331226634" lastFinishedPulling="2024-07-02 09:48:26.448913957 +0000 UTC m=+9.011277680" observedRunningTime="2024-07-02 09:48:32.866852859 +0000 UTC m=+15.429216588" watchObservedRunningTime="2024-07-02 09:48:32.867247521 +0000 UTC m=+15.429611250" Jul 2 09:48:33.661009 kubelet[1685]: E0702 09:48:33.660898 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:33.785169 systemd-timesyncd[1288]: Network configuration changed, trying to establish connection. Jul 2 09:48:33.785282 systemd-networkd[1101]: cilium_host: Link UP Jul 2 09:48:33.785368 systemd-networkd[1101]: cilium_net: Link UP Jul 2 09:48:33.785371 systemd-networkd[1101]: cilium_net: Gained carrier Jul 2 09:48:33.785472 systemd-networkd[1101]: cilium_host: Gained carrier Jul 2 09:48:33.793256 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 09:48:33.793321 systemd-networkd[1101]: cilium_host: Gained IPv6LL Jul 2 09:48:33.835911 systemd-networkd[1101]: cilium_vxlan: Link UP Jul 2 09:48:33.835916 systemd-networkd[1101]: cilium_vxlan: Gained carrier Jul 2 09:48:33.980245 kernel: NET: Registered PF_ALG protocol family Jul 2 09:48:34.534442 systemd-resolved[1286]: Clock change detected. Flushing caches. Jul 2 09:48:34.534451 systemd-timesyncd[1288]: Contacted time server [2607:2c40:beef:16::3]:123 (2.flatcar.pool.ntp.org). Jul 2 09:48:34.534479 systemd-timesyncd[1288]: Initial clock synchronization to Tue 2024-07-02 09:48:34.534371 UTC. 
Jul 2 09:48:34.560914 systemd-networkd[1101]: cilium_net: Gained IPv6LL Jul 2 09:48:34.918331 systemd-networkd[1101]: lxc_health: Link UP Jul 2 09:48:34.943500 systemd-networkd[1101]: lxc_health: Gained carrier Jul 2 09:48:34.943794 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 09:48:35.133785 kubelet[1685]: E0702 09:48:35.133743 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:35.594517 kubelet[1685]: I0702 09:48:35.594408 1685 topology_manager.go:215] "Topology Admit Handler" podUID="89866b32-a4d9-4bb3-8003-9408f6d648a8" podNamespace="default" podName="nginx-deployment-6d5f899847-ghd8t" Jul 2 09:48:35.608891 systemd-networkd[1101]: cilium_vxlan: Gained IPv6LL Jul 2 09:48:35.655352 kubelet[1685]: I0702 09:48:35.655231 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl2wf\" (UniqueName: \"kubernetes.io/projected/89866b32-a4d9-4bb3-8003-9408f6d648a8-kube-api-access-zl2wf\") pod \"nginx-deployment-6d5f899847-ghd8t\" (UID: \"89866b32-a4d9-4bb3-8003-9408f6d648a8\") " pod="default/nginx-deployment-6d5f899847-ghd8t" Jul 2 09:48:35.901004 env[1350]: time="2024-07-02T09:48:35.900750648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-ghd8t,Uid:89866b32-a4d9-4bb3-8003-9408f6d648a8,Namespace:default,Attempt:0,}" Jul 2 09:48:35.925926 systemd-networkd[1101]: lxcf32d389eea30: Link UP Jul 2 09:48:35.946808 kernel: eth0: renamed from tmpce4bc Jul 2 09:48:35.974463 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 09:48:35.974512 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf32d389eea30: link becomes ready Jul 2 09:48:35.974531 systemd-networkd[1101]: lxcf32d389eea30: Gained carrier Jul 2 09:48:36.134275 kubelet[1685]: E0702 09:48:36.134252 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:36.312956 systemd-networkd[1101]: lxc_health: Gained IPv6LL Jul 2 09:48:37.016931 systemd-networkd[1101]: lxcf32d389eea30: Gained IPv6LL Jul 2 09:48:37.134731 kubelet[1685]: E0702 09:48:37.134683 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:38.124327 kubelet[1685]: E0702 09:48:38.124279 1685 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:38.135472 kubelet[1685]: E0702 09:48:38.135460 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:38.185722 env[1350]: time="2024-07-02T09:48:38.185688369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:48:38.185722 env[1350]: time="2024-07-02T09:48:38.185709953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:48:38.185722 env[1350]: time="2024-07-02T09:48:38.185716768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:48:38.185979 env[1350]: time="2024-07-02T09:48:38.185781889Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce4bc4529b8f40967370f7e443c5d4c97d7a8bfbf871ae8f8ce537c1f977bc66 pid=2859 runtime=io.containerd.runc.v2 Jul 2 09:48:38.212854 env[1350]: time="2024-07-02T09:48:38.212779377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-ghd8t,Uid:89866b32-a4d9-4bb3-8003-9408f6d648a8,Namespace:default,Attempt:0,} returns sandbox id \"ce4bc4529b8f40967370f7e443c5d4c97d7a8bfbf871ae8f8ce537c1f977bc66\"" Jul 2 09:48:38.213518 env[1350]: time="2024-07-02T09:48:38.213476119Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 2 09:48:39.136355 kubelet[1685]: E0702 09:48:39.136234 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:40.137405 kubelet[1685]: E0702 09:48:40.137357 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:40.263323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2344680631.mount: Deactivated successfully. Jul 2 09:48:41.101278 env[1350]: time="2024-07-02T09:48:41.101251326Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:41.101801 env[1350]: time="2024-07-02T09:48:41.101784529Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:41.103045 env[1350]: time="2024-07-02T09:48:41.103033060Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:41.103803 env[1350]: time="2024-07-02T09:48:41.103790979Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:41.104227 env[1350]: time="2024-07-02T09:48:41.104200253Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\"" Jul 2 09:48:41.105453 env[1350]: time="2024-07-02T09:48:41.105439618Z" level=info msg="CreateContainer within sandbox \"ce4bc4529b8f40967370f7e443c5d4c97d7a8bfbf871ae8f8ce537c1f977bc66\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 2 09:48:41.110068 env[1350]: time="2024-07-02T09:48:41.110024743Z" level=info msg="CreateContainer within sandbox \"ce4bc4529b8f40967370f7e443c5d4c97d7a8bfbf871ae8f8ce537c1f977bc66\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"a47cd809a4b61a698977f11d211bac79ca5646673520c333f80c9392e67f6a7f\"" Jul 2 09:48:41.110228 env[1350]: time="2024-07-02T09:48:41.110211840Z" level=info msg="StartContainer for \"a47cd809a4b61a698977f11d211bac79ca5646673520c333f80c9392e67f6a7f\"" Jul 2 09:48:41.131358 env[1350]: time="2024-07-02T09:48:41.131333920Z" level=info msg="StartContainer for \"a47cd809a4b61a698977f11d211bac79ca5646673520c333f80c9392e67f6a7f\" returns successfully" Jul 2 09:48:41.138168 kubelet[1685]: E0702 
09:48:41.138119 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:41.342246 kubelet[1685]: I0702 09:48:41.342144 1685 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-ghd8t" podStartSLOduration=3.451054544 podCreationTimestamp="2024-07-02 09:48:35 +0000 UTC" firstStartedPulling="2024-07-02 09:48:38.213330228 +0000 UTC m=+20.303141414" lastFinishedPulling="2024-07-02 09:48:41.104337181 +0000 UTC m=+23.194148367" observedRunningTime="2024-07-02 09:48:41.341656928 +0000 UTC m=+23.431468174" watchObservedRunningTime="2024-07-02 09:48:41.342061497 +0000 UTC m=+23.431872747" Jul 2 09:48:42.139241 kubelet[1685]: E0702 09:48:42.139127 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:43.140278 kubelet[1685]: E0702 09:48:43.140154 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:44.141008 kubelet[1685]: E0702 09:48:44.140886 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:45.141236 kubelet[1685]: E0702 09:48:45.141131 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:46.141491 kubelet[1685]: E0702 09:48:46.141380 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:47.142034 kubelet[1685]: E0702 09:48:47.141984 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:47.151970 kubelet[1685]: I0702 09:48:47.151927 1685 topology_manager.go:215] "Topology Admit Handler" podUID="603c88e2-6e4b-4c03-bc93-c13e9283ac72" podNamespace="default" podName="nfs-server-provisioner-0" Jul 2 09:48:47.233894 kubelet[1685]: I0702 09:48:47.233766 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8dqz\" (UniqueName: \"kubernetes.io/projected/603c88e2-6e4b-4c03-bc93-c13e9283ac72-kube-api-access-z8dqz\") pod \"nfs-server-provisioner-0\" (UID: \"603c88e2-6e4b-4c03-bc93-c13e9283ac72\") " pod="default/nfs-server-provisioner-0" Jul 2 09:48:47.233894 kubelet[1685]: I0702 09:48:47.233900 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/603c88e2-6e4b-4c03-bc93-c13e9283ac72-data\") pod \"nfs-server-provisioner-0\" (UID: \"603c88e2-6e4b-4c03-bc93-c13e9283ac72\") " pod="default/nfs-server-provisioner-0" Jul 2 09:48:47.456511 env[1350]: time="2024-07-02T09:48:47.456252559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:603c88e2-6e4b-4c03-bc93-c13e9283ac72,Namespace:default,Attempt:0,}" Jul 2 09:48:47.478312 systemd-networkd[1101]: lxc2b08a2c90cbd: Link UP Jul 2 09:48:47.506819 kernel: eth0: renamed from tmp2c8d8 Jul 2 09:48:47.540024 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 09:48:47.540219 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2b08a2c90cbd: link becomes ready Jul 2 09:48:47.540250 systemd-networkd[1101]: lxc2b08a2c90cbd: Gained carrier Jul 2 09:48:47.789613 env[1350]: time="2024-07-02T09:48:47.789518853Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:48:47.789613 env[1350]: time="2024-07-02T09:48:47.789539591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:48:47.789613 env[1350]: time="2024-07-02T09:48:47.789546421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:48:47.789722 env[1350]: time="2024-07-02T09:48:47.789613369Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2c8d8c3ff8bf2d084e399903433351db6c0cb803c75ffba3f780a7371716acf6 pid=3027 runtime=io.containerd.runc.v2 Jul 2 09:48:47.816511 env[1350]: time="2024-07-02T09:48:47.816456657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:603c88e2-6e4b-4c03-bc93-c13e9283ac72,Namespace:default,Attempt:0,} returns sandbox id \"2c8d8c3ff8bf2d084e399903433351db6c0cb803c75ffba3f780a7371716acf6\"" Jul 2 09:48:47.817229 env[1350]: time="2024-07-02T09:48:47.817187698Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 2 09:48:48.143098 kubelet[1685]: E0702 09:48:48.142975 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:49.144002 kubelet[1685]: E0702 09:48:49.143934 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:49.368922 systemd-networkd[1101]: lxc2b08a2c90cbd: Gained IPv6LL Jul 2 09:48:50.144047 kubelet[1685]: E0702 09:48:50.144025 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:50.447374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1850489657.mount: Deactivated successfully. 
Jul 2 09:48:51.144938 kubelet[1685]: E0702 09:48:51.144914 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:51.604062 env[1350]: time="2024-07-02T09:48:51.604042476Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:51.604607 env[1350]: time="2024-07-02T09:48:51.604555990Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:51.605477 env[1350]: time="2024-07-02T09:48:51.605410491Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:51.606412 env[1350]: time="2024-07-02T09:48:51.606367452Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:48:51.606906 env[1350]: time="2024-07-02T09:48:51.606843117Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jul 2 09:48:51.608157 env[1350]: time="2024-07-02T09:48:51.608115917Z" level=info msg="CreateContainer within sandbox \"2c8d8c3ff8bf2d084e399903433351db6c0cb803c75ffba3f780a7371716acf6\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 2 09:48:51.613658 env[1350]: time="2024-07-02T09:48:51.613636730Z" level=info msg="CreateContainer within sandbox \"2c8d8c3ff8bf2d084e399903433351db6c0cb803c75ffba3f780a7371716acf6\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"08b0c0d4d8e7426f362f6731e03ce1a3d7dffc5d7d837885b96c4736a3cee49d\"" Jul 2 09:48:51.613920 env[1350]: time="2024-07-02T09:48:51.613906493Z" level=info msg="StartContainer for \"08b0c0d4d8e7426f362f6731e03ce1a3d7dffc5d7d837885b96c4736a3cee49d\"" Jul 2 09:48:51.642839 env[1350]: time="2024-07-02T09:48:51.642760369Z" level=info msg="StartContainer for \"08b0c0d4d8e7426f362f6731e03ce1a3d7dffc5d7d837885b96c4736a3cee49d\" returns successfully" Jul 2 09:48:52.145213 kubelet[1685]: E0702 09:48:52.145108 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:52.377413 kubelet[1685]: I0702 09:48:52.377354 1685 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.587257969 podCreationTimestamp="2024-07-02 09:48:47 +0000 UTC" firstStartedPulling="2024-07-02 09:48:47.81706612 +0000 UTC m=+29.906877303" lastFinishedPulling="2024-07-02 09:48:51.607070972 +0000 UTC m=+33.696882161" observedRunningTime="2024-07-02 09:48:52.377256314 +0000 UTC m=+34.467067564" watchObservedRunningTime="2024-07-02 09:48:52.377262827 +0000 UTC m=+34.467074067" Jul 2 09:48:53.145906 kubelet[1685]: E0702 09:48:53.145773 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:53.470371 update_engine[1344]: I0702 09:48:53.470146 1344 update_attempter.cc:509] 
Updating boot flags... Jul 2 09:48:54.146757 kubelet[1685]: E0702 09:48:54.146635 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:55.147287 kubelet[1685]: E0702 09:48:55.147165 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:56.147779 kubelet[1685]: E0702 09:48:56.147659 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:57.148205 kubelet[1685]: E0702 09:48:57.148093 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:58.124496 kubelet[1685]: E0702 09:48:58.124373 1685 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:58.149204 kubelet[1685]: E0702 09:48:58.149084 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:48:59.149814 kubelet[1685]: E0702 09:48:59.149687 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:00.150901 kubelet[1685]: E0702 09:49:00.150777 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:01.151323 kubelet[1685]: E0702 09:49:01.151207 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:01.351008 kubelet[1685]: I0702 09:49:01.350942 1685 topology_manager.go:215] "Topology Admit Handler" podUID="b84074e0-bc0c-4bc8-9af0-c3449e0472a2" podNamespace="default" podName="test-pod-1" Jul 2 09:49:01.434358 kubelet[1685]: I0702 09:49:01.434214 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d85df7ad-cc31-4653-828a-a9b33095b9d7\" (UniqueName: \"kubernetes.io/nfs/b84074e0-bc0c-4bc8-9af0-c3449e0472a2-pvc-d85df7ad-cc31-4653-828a-a9b33095b9d7\") pod \"test-pod-1\" (UID: \"b84074e0-bc0c-4bc8-9af0-c3449e0472a2\") " pod="default/test-pod-1" Jul 2 09:49:01.434358 kubelet[1685]: I0702 09:49:01.434281 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfvgr\" (UniqueName: \"kubernetes.io/projected/b84074e0-bc0c-4bc8-9af0-c3449e0472a2-kube-api-access-zfvgr\") pod \"test-pod-1\" (UID: \"b84074e0-bc0c-4bc8-9af0-c3449e0472a2\") " pod="default/test-pod-1" Jul 2 09:49:01.564864 kernel: FS-Cache: Loaded Jul 2 09:49:01.603946 kernel: RPC: Registered named UNIX socket transport module. Jul 2 09:49:01.604023 kernel: RPC: Registered udp transport module. Jul 2 09:49:01.604041 kernel: RPC: Registered tcp transport module. Jul 2 09:49:01.608832 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jul 2 09:49:01.673870 kernel: FS-Cache: Netfs 'nfs' registered for caching Jul 2 09:49:01.802098 kernel: NFS: Registering the id_resolver key type Jul 2 09:49:01.802146 kernel: Key type id_resolver registered Jul 2 09:49:01.802162 kernel: Key type id_legacy registered Jul 2 09:49:02.152359 kubelet[1685]: E0702 09:49:02.152234 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:02.193165 nfsidmap[3177]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.5-a-ce93e9d8e3' Jul 2 09:49:02.207678 nfsidmap[3178]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.5-a-ce93e9d8e3' Jul 2 09:49:02.257332 env[1350]: time="2024-07-02T09:49:02.257186409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b84074e0-bc0c-4bc8-9af0-c3449e0472a2,Namespace:default,Attempt:0,}" Jul 2 09:49:02.278125 systemd-networkd[1101]: lxc609433da1bfe: Link UP Jul 2 09:49:02.300798 kernel: eth0: renamed from tmp827db Jul 2 09:49:02.325332 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 09:49:02.325385 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc609433da1bfe: link becomes ready Jul 2 09:49:02.325391 systemd-networkd[1101]: lxc609433da1bfe: Gained carrier Jul 2 09:49:02.475823 env[1350]: time="2024-07-02T09:49:02.475757794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:49:02.475823 env[1350]: time="2024-07-02T09:49:02.475780160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:49:02.475823 env[1350]: time="2024-07-02T09:49:02.475790139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:49:02.475965 env[1350]: time="2024-07-02T09:49:02.475865229Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/827db4eaa59bf58d66984c53dbcc0f5016d366ba44f0cd432dccff0a03ca9de0 pid=3237 runtime=io.containerd.runc.v2 Jul 2 09:49:02.501979 env[1350]: time="2024-07-02T09:49:02.501922533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b84074e0-bc0c-4bc8-9af0-c3449e0472a2,Namespace:default,Attempt:0,} returns sandbox id \"827db4eaa59bf58d66984c53dbcc0f5016d366ba44f0cd432dccff0a03ca9de0\"" Jul 2 09:49:02.502630 env[1350]: time="2024-07-02T09:49:02.502618655Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 2 09:49:02.857441 env[1350]: time="2024-07-02T09:49:02.857320295Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:49:02.860018 env[1350]: time="2024-07-02T09:49:02.859910682Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:49:02.865126 env[1350]: time="2024-07-02T09:49:02.865020230Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:49:02.870247 env[1350]: time="2024-07-02T09:49:02.870122077Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:49:02.872600 env[1350]: time="2024-07-02T09:49:02.872475982Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\"" Jul 2 09:49:02.877273 env[1350]: time="2024-07-02T09:49:02.877163982Z" level=info msg="CreateContainer within sandbox \"827db4eaa59bf58d66984c53dbcc0f5016d366ba44f0cd432dccff0a03ca9de0\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 2 09:49:02.891885 env[1350]: time="2024-07-02T09:49:02.891846121Z" level=info msg="CreateContainer within sandbox \"827db4eaa59bf58d66984c53dbcc0f5016d366ba44f0cd432dccff0a03ca9de0\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"5a8a3231004aaa9256809380a4321a910809509ae1007402fca5d2163a5dbf56\"" Jul 2 09:49:02.892146 env[1350]: time="2024-07-02T09:49:02.892108881Z" level=info msg="StartContainer for \"5a8a3231004aaa9256809380a4321a910809509ae1007402fca5d2163a5dbf56\"" Jul 2 09:49:02.913392 env[1350]: time="2024-07-02T09:49:02.913367194Z" level=info msg="StartContainer for \"5a8a3231004aaa9256809380a4321a910809509ae1007402fca5d2163a5dbf56\" returns successfully" Jul 2 09:49:03.152928 kubelet[1685]: E0702 09:49:03.152674 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:03.409298 kubelet[1685]: I0702 09:49:03.409065 1685 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.03843489 podCreationTimestamp="2024-07-02 09:48:47 +0000 UTC" firstStartedPulling="2024-07-02 09:49:02.502472293 +0000 UTC m=+44.592283480" lastFinishedPulling="2024-07-02 09:49:02.87294488 +0000 
UTC m=+44.962756145" observedRunningTime="2024-07-02 09:49:03.408306981 +0000 UTC m=+45.498118236" watchObservedRunningTime="2024-07-02 09:49:03.408907555 +0000 UTC m=+45.498718793" Jul 2 09:49:04.153107 systemd-networkd[1101]: lxc609433da1bfe: Gained IPv6LL Jul 2 09:49:04.153994 kubelet[1685]: E0702 09:49:04.153369 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:05.154266 kubelet[1685]: E0702 09:49:05.154134 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:06.154700 kubelet[1685]: E0702 09:49:06.154580 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:07.155540 kubelet[1685]: E0702 09:49:07.155422 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:08.156226 kubelet[1685]: E0702 09:49:08.156108 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:09.157292 kubelet[1685]: E0702 09:49:09.157170 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:09.540332 env[1350]: time="2024-07-02T09:49:09.540195144Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 09:49:09.543328 env[1350]: time="2024-07-02T09:49:09.543306486Z" level=info msg="StopContainer for \"b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd\" with timeout 2 (s)" Jul 2 09:49:09.543450 env[1350]: time="2024-07-02T09:49:09.543410265Z" level=info msg="Stop container \"b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd\" with signal terminated" Jul 2 09:49:09.546094 systemd-networkd[1101]: lxc_health: Link DOWN Jul 2 09:49:09.546097 systemd-networkd[1101]: lxc_health: Lost carrier Jul 2 09:49:09.612824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd-rootfs.mount: Deactivated successfully. 
Jul 2 09:49:10.158230 kubelet[1685]: E0702 09:49:10.158108 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:10.781529 env[1350]: time="2024-07-02T09:49:10.781396494Z" level=info msg="shim disconnected" id=b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd Jul 2 09:49:10.782706 env[1350]: time="2024-07-02T09:49:10.781534116Z" level=warning msg="cleaning up after shim disconnected" id=b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd namespace=k8s.io Jul 2 09:49:10.782706 env[1350]: time="2024-07-02T09:49:10.781579748Z" level=info msg="cleaning up dead shim" Jul 2 09:49:10.789963 env[1350]: time="2024-07-02T09:49:10.789946501Z" level=warning msg="cleanup warnings time=\"2024-07-02T09:49:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3376 runtime=io.containerd.runc.v2\n" Jul 2 09:49:10.790734 env[1350]: time="2024-07-02T09:49:10.790719630Z" level=info msg="StopContainer for \"b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd\" returns successfully" Jul 2 09:49:10.791177 env[1350]: time="2024-07-02T09:49:10.791132727Z" level=info msg="StopPodSandbox for \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\"" Jul 2 09:49:10.791177 env[1350]: time="2024-07-02T09:49:10.791165066Z" level=info msg="Container to stop \"7d2bd412686cbb867e68a7d3490dcb3794d906bc7b399e26c99c66a57adcf7c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 09:49:10.791177 env[1350]: time="2024-07-02T09:49:10.791173547Z" level=info msg="Container to stop \"54b6bd548a8c93a4790cb3b0b98ddfa5cbe395c57447723b64e6fbb1dfb4455a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 09:49:10.791263 env[1350]: time="2024-07-02T09:49:10.791179807Z" level=info msg="Container to stop \"3adac6758886701aa33d3906713c7bd981907717320099bf9a1fc18d97635efe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 09:49:10.791263 env[1350]: time="2024-07-02T09:49:10.791185964Z" level=info msg="Container to stop \"b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 09:49:10.791263 env[1350]: time="2024-07-02T09:49:10.791191797Z" level=info msg="Container to stop \"8d93bac9ee0e84baa0e7c79eecb8572f4c7f6f457e08827923556145606ed1d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 09:49:10.793176 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d-shm.mount: Deactivated successfully. Jul 2 09:49:10.801764 env[1350]: time="2024-07-02T09:49:10.801729374Z" level=info msg="shim disconnected" id=575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d Jul 2 09:49:10.801764 env[1350]: time="2024-07-02T09:49:10.801764869Z" level=warning msg="cleaning up after shim disconnected" id=575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d namespace=k8s.io Jul 2 09:49:10.801905 env[1350]: time="2024-07-02T09:49:10.801773134Z" level=info msg="cleaning up dead shim" Jul 2 09:49:10.802036 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d-rootfs.mount: Deactivated successfully. 
Jul 2 09:49:10.805758 env[1350]: time="2024-07-02T09:49:10.805711920Z" level=warning msg="cleanup warnings time=\"2024-07-02T09:49:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3408 runtime=io.containerd.runc.v2\n" Jul 2 09:49:10.805957 env[1350]: time="2024-07-02T09:49:10.805903462Z" level=info msg="TearDown network for sandbox \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\" successfully" Jul 2 09:49:10.805957 env[1350]: time="2024-07-02T09:49:10.805917680Z" level=info msg="StopPodSandbox for \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\" returns successfully" Jul 2 09:49:11.004761 kubelet[1685]: I0702 09:49:11.004649 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-etc-cni-netd\") pod \"50759504-e25c-4320-b5cd-8db8960523ae\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " Jul 2 09:49:11.004761 kubelet[1685]: I0702 09:49:11.004747 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-cilium-run\") pod \"50759504-e25c-4320-b5cd-8db8960523ae\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " Jul 2 09:49:11.005347 kubelet[1685]: I0702 09:49:11.004830 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-cilium-cgroup\") pod \"50759504-e25c-4320-b5cd-8db8960523ae\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " Jul 2 09:49:11.005347 kubelet[1685]: I0702 09:49:11.004782 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "50759504-e25c-4320-b5cd-8db8960523ae" (UID: "50759504-e25c-4320-b5cd-8db8960523ae"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:49:11.005347 kubelet[1685]: I0702 09:49:11.004899 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-cni-path\") pod \"50759504-e25c-4320-b5cd-8db8960523ae\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " Jul 2 09:49:11.005347 kubelet[1685]: I0702 09:49:11.004898 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "50759504-e25c-4320-b5cd-8db8960523ae" (UID: "50759504-e25c-4320-b5cd-8db8960523ae"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:49:11.005347 kubelet[1685]: I0702 09:49:11.004961 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-host-proc-sys-net\") pod \"50759504-e25c-4320-b5cd-8db8960523ae\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " Jul 2 09:49:11.006296 kubelet[1685]: I0702 09:49:11.004969 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "50759504-e25c-4320-b5cd-8db8960523ae" (UID: "50759504-e25c-4320-b5cd-8db8960523ae"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:49:11.006296 kubelet[1685]: I0702 09:49:11.005035 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50759504-e25c-4320-b5cd-8db8960523ae-hubble-tls\") pod \"50759504-e25c-4320-b5cd-8db8960523ae\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " Jul 2 09:49:11.006296 kubelet[1685]: I0702 09:49:11.005038 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-cni-path" (OuterVolumeSpecName: "cni-path") pod "50759504-e25c-4320-b5cd-8db8960523ae" (UID: "50759504-e25c-4320-b5cd-8db8960523ae"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:49:11.006296 kubelet[1685]: I0702 09:49:11.005079 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "50759504-e25c-4320-b5cd-8db8960523ae" (UID: "50759504-e25c-4320-b5cd-8db8960523ae"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:49:11.006296 kubelet[1685]: I0702 09:49:11.005104 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6ftn\" (UniqueName: \"kubernetes.io/projected/50759504-e25c-4320-b5cd-8db8960523ae-kube-api-access-s6ftn\") pod \"50759504-e25c-4320-b5cd-8db8960523ae\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " Jul 2 09:49:11.007050 kubelet[1685]: I0702 09:49:11.005230 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-xtables-lock\") pod \"50759504-e25c-4320-b5cd-8db8960523ae\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " Jul 2 09:49:11.007050 kubelet[1685]: I0702 09:49:11.005299 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-hostproc\") pod \"50759504-e25c-4320-b5cd-8db8960523ae\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " Jul 2 09:49:11.007050 kubelet[1685]: I0702 09:49:11.005335 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "50759504-e25c-4320-b5cd-8db8960523ae" (UID: "50759504-e25c-4320-b5cd-8db8960523ae"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:49:11.007050 kubelet[1685]: I0702 09:49:11.005366 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50759504-e25c-4320-b5cd-8db8960523ae-cilium-config-path\") pod \"50759504-e25c-4320-b5cd-8db8960523ae\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " Jul 2 09:49:11.007050 kubelet[1685]: I0702 09:49:11.005411 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-hostproc" (OuterVolumeSpecName: "hostproc") pod "50759504-e25c-4320-b5cd-8db8960523ae" (UID: "50759504-e25c-4320-b5cd-8db8960523ae"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:49:11.007050 kubelet[1685]: I0702 09:49:11.005534 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-lib-modules\") pod \"50759504-e25c-4320-b5cd-8db8960523ae\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " Jul 2 09:49:11.007673 kubelet[1685]: I0702 09:49:11.005585 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "50759504-e25c-4320-b5cd-8db8960523ae" (UID: "50759504-e25c-4320-b5cd-8db8960523ae"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:49:11.007673 kubelet[1685]: I0702 09:49:11.005643 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-host-proc-sys-kernel\") pod \"50759504-e25c-4320-b5cd-8db8960523ae\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " Jul 2 09:49:11.007673 kubelet[1685]: I0702 09:49:11.005767 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-bpf-maps\") pod \"50759504-e25c-4320-b5cd-8db8960523ae\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " Jul 2 09:49:11.007673 kubelet[1685]: I0702 09:49:11.005807 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "50759504-e25c-4320-b5cd-8db8960523ae" (UID: "50759504-e25c-4320-b5cd-8db8960523ae"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:49:11.007673 kubelet[1685]: I0702 09:49:11.005748 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "50759504-e25c-4320-b5cd-8db8960523ae" (UID: "50759504-e25c-4320-b5cd-8db8960523ae"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:49:11.008208 kubelet[1685]: I0702 09:49:11.005933 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50759504-e25c-4320-b5cd-8db8960523ae-clustermesh-secrets\") pod \"50759504-e25c-4320-b5cd-8db8960523ae\" (UID: \"50759504-e25c-4320-b5cd-8db8960523ae\") " Jul 2 09:49:11.008208 kubelet[1685]: I0702 09:49:11.006057 1685 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-lib-modules\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:11.008208 kubelet[1685]: I0702 09:49:11.006124 1685 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-host-proc-sys-kernel\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:11.008208 kubelet[1685]: I0702 09:49:11.006175 1685 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-bpf-maps\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:11.008208 kubelet[1685]: I0702 09:49:11.006234 1685 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-etc-cni-netd\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:11.008208 kubelet[1685]: I0702 09:49:11.006286 1685 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-cilium-run\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:11.008208 kubelet[1685]: I0702 09:49:11.006339 1685 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-cilium-cgroup\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:11.008208 kubelet[1685]: I0702 09:49:11.006391 1685 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-xtables-lock\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:11.009016 kubelet[1685]: I0702 09:49:11.006449 1685 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-cni-path\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:11.009016 kubelet[1685]: I0702 09:49:11.006507 1685 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-host-proc-sys-net\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:11.009016 kubelet[1685]: I0702 09:49:11.006565 1685 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50759504-e25c-4320-b5cd-8db8960523ae-hostproc\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:11.010027 kubelet[1685]: I0702 09:49:11.009991 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50759504-e25c-4320-b5cd-8db8960523ae-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "50759504-e25c-4320-b5cd-8db8960523ae" (UID: "50759504-e25c-4320-b5cd-8db8960523ae"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 09:49:11.010379 kubelet[1685]: I0702 09:49:11.010315 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50759504-e25c-4320-b5cd-8db8960523ae-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "50759504-e25c-4320-b5cd-8db8960523ae" (UID: "50759504-e25c-4320-b5cd-8db8960523ae"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 09:49:11.010379 kubelet[1685]: I0702 09:49:11.010325 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50759504-e25c-4320-b5cd-8db8960523ae-kube-api-access-s6ftn" (OuterVolumeSpecName: "kube-api-access-s6ftn") pod "50759504-e25c-4320-b5cd-8db8960523ae" (UID: "50759504-e25c-4320-b5cd-8db8960523ae"). InnerVolumeSpecName "kube-api-access-s6ftn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 09:49:11.010379 kubelet[1685]: I0702 09:49:11.010349 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50759504-e25c-4320-b5cd-8db8960523ae-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "50759504-e25c-4320-b5cd-8db8960523ae" (UID: "50759504-e25c-4320-b5cd-8db8960523ae"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 09:49:11.012062 systemd[1]: var-lib-kubelet-pods-50759504\x2de25c\x2d4320\x2db5cd\x2d8db8960523ae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds6ftn.mount: Deactivated successfully. Jul 2 09:49:11.012189 systemd[1]: var-lib-kubelet-pods-50759504\x2de25c\x2d4320\x2db5cd\x2d8db8960523ae-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 09:49:11.012302 systemd[1]: var-lib-kubelet-pods-50759504\x2de25c\x2d4320\x2db5cd\x2d8db8960523ae-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 2 09:49:11.107399 kubelet[1685]: I0702 09:49:11.107292 1685 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50759504-e25c-4320-b5cd-8db8960523ae-hubble-tls\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:11.107399 kubelet[1685]: I0702 09:49:11.107368 1685 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-s6ftn\" (UniqueName: \"kubernetes.io/projected/50759504-e25c-4320-b5cd-8db8960523ae-kube-api-access-s6ftn\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:11.107399 kubelet[1685]: I0702 09:49:11.107405 1685 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50759504-e25c-4320-b5cd-8db8960523ae-cilium-config-path\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:11.107984 kubelet[1685]: I0702 09:49:11.107445 1685 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50759504-e25c-4320-b5cd-8db8960523ae-clustermesh-secrets\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:11.158961 kubelet[1685]: E0702 09:49:11.158845 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:11.428664 kubelet[1685]: I0702 09:49:11.428470 1685 scope.go:117] "RemoveContainer" containerID="b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd" Jul 2 09:49:11.431608 env[1350]: time="2024-07-02T09:49:11.431525591Z" level=info msg="RemoveContainer for \"b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd\"" Jul 2 09:49:11.434774 env[1350]: time="2024-07-02T09:49:11.434734465Z" level=info msg="RemoveContainer for \"b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd\" returns successfully" Jul 2 09:49:11.434926 kubelet[1685]: I0702 09:49:11.434879 1685 scope.go:117] "RemoveContainer" containerID="3adac6758886701aa33d3906713c7bd981907717320099bf9a1fc18d97635efe" Jul 2 09:49:11.435598 env[1350]: time="2024-07-02T09:49:11.435560485Z" level=info msg="RemoveContainer for \"3adac6758886701aa33d3906713c7bd981907717320099bf9a1fc18d97635efe\"" Jul 2 09:49:11.436944 env[1350]: time="2024-07-02T09:49:11.436895060Z" level=info msg="RemoveContainer for \"3adac6758886701aa33d3906713c7bd981907717320099bf9a1fc18d97635efe\" returns successfully" Jul 2 09:49:11.437116 kubelet[1685]: I0702 09:49:11.437042 1685 scope.go:117] "RemoveContainer" containerID="54b6bd548a8c93a4790cb3b0b98ddfa5cbe395c57447723b64e6fbb1dfb4455a" Jul 2 09:49:11.437708 env[1350]: time="2024-07-02T09:49:11.437673816Z" level=info msg="RemoveContainer for \"54b6bd548a8c93a4790cb3b0b98ddfa5cbe395c57447723b64e6fbb1dfb4455a\"" Jul 2 09:49:11.438741 env[1350]: time="2024-07-02T09:49:11.438706471Z" level=info msg="RemoveContainer for \"54b6bd548a8c93a4790cb3b0b98ddfa5cbe395c57447723b64e6fbb1dfb4455a\" returns successfully" Jul 2 09:49:11.438807 kubelet[1685]: I0702 09:49:11.438796 1685 scope.go:117] "RemoveContainer" containerID="8d93bac9ee0e84baa0e7c79eecb8572f4c7f6f457e08827923556145606ed1d4" Jul 2 09:49:11.439418 env[1350]: time="2024-07-02T09:49:11.439389930Z" level=info msg="RemoveContainer for \"8d93bac9ee0e84baa0e7c79eecb8572f4c7f6f457e08827923556145606ed1d4\"" Jul 2 09:49:11.440355 env[1350]: time="2024-07-02T09:49:11.440344275Z" level=info msg="RemoveContainer for \"8d93bac9ee0e84baa0e7c79eecb8572f4c7f6f457e08827923556145606ed1d4\" returns successfully" Jul 2 09:49:11.440432 kubelet[1685]: I0702 09:49:11.440424 1685 
scope.go:117] "RemoveContainer" containerID="7d2bd412686cbb867e68a7d3490dcb3794d906bc7b399e26c99c66a57adcf7c4" Jul 2 09:49:11.440956 env[1350]: time="2024-07-02T09:49:11.440915608Z" level=info msg="RemoveContainer for \"7d2bd412686cbb867e68a7d3490dcb3794d906bc7b399e26c99c66a57adcf7c4\"" Jul 2 09:49:11.442092 env[1350]: time="2024-07-02T09:49:11.442053170Z" level=info msg="RemoveContainer for \"7d2bd412686cbb867e68a7d3490dcb3794d906bc7b399e26c99c66a57adcf7c4\" returns successfully" Jul 2 09:49:11.442142 kubelet[1685]: I0702 09:49:11.442115 1685 scope.go:117] "RemoveContainer" containerID="b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd" Jul 2 09:49:11.442289 env[1350]: time="2024-07-02T09:49:11.442221431Z" level=error msg="ContainerStatus for \"b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd\": not found" Jul 2 09:49:11.442371 kubelet[1685]: E0702 09:49:11.442350 1685 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd\": not found" containerID="b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd" Jul 2 09:49:11.442448 kubelet[1685]: I0702 09:49:11.442418 1685 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd"} err="failed to get container status \"b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd\": rpc error: code = NotFound desc = an error occurred when try to find container \"b66eaaa0a40dc5689f22115223f966c27f2ee7c8fc7209b86a42078a8fb9acbd\": not found" Jul 2 09:49:11.442448 kubelet[1685]: I0702 09:49:11.442425 1685 scope.go:117] "RemoveContainer" containerID="3adac6758886701aa33d3906713c7bd981907717320099bf9a1fc18d97635efe" Jul 2 09:49:11.442599 env[1350]: time="2024-07-02T09:49:11.442562184Z" level=error msg="ContainerStatus for \"3adac6758886701aa33d3906713c7bd981907717320099bf9a1fc18d97635efe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3adac6758886701aa33d3906713c7bd981907717320099bf9a1fc18d97635efe\": not found" Jul 2 09:49:11.442671 kubelet[1685]: E0702 09:49:11.442665 1685 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3adac6758886701aa33d3906713c7bd981907717320099bf9a1fc18d97635efe\": not found" containerID="3adac6758886701aa33d3906713c7bd981907717320099bf9a1fc18d97635efe" Jul 2 09:49:11.442695 kubelet[1685]: I0702 09:49:11.442682 1685 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3adac6758886701aa33d3906713c7bd981907717320099bf9a1fc18d97635efe"} err="failed to get container status \"3adac6758886701aa33d3906713c7bd981907717320099bf9a1fc18d97635efe\": rpc error: code = NotFound desc = an error occurred when try to find container \"3adac6758886701aa33d3906713c7bd981907717320099bf9a1fc18d97635efe\": not found" Jul 2 09:49:11.442695 kubelet[1685]: I0702 09:49:11.442689 1685 scope.go:117] "RemoveContainer" containerID="54b6bd548a8c93a4790cb3b0b98ddfa5cbe395c57447723b64e6fbb1dfb4455a" Jul 2 09:49:11.442780 env[1350]: time="2024-07-02T09:49:11.442758128Z" level=error 
msg="ContainerStatus for \"54b6bd548a8c93a4790cb3b0b98ddfa5cbe395c57447723b64e6fbb1dfb4455a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54b6bd548a8c93a4790cb3b0b98ddfa5cbe395c57447723b64e6fbb1dfb4455a\": not found" Jul 2 09:49:11.442887 kubelet[1685]: E0702 09:49:11.442854 1685 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54b6bd548a8c93a4790cb3b0b98ddfa5cbe395c57447723b64e6fbb1dfb4455a\": not found" containerID="54b6bd548a8c93a4790cb3b0b98ddfa5cbe395c57447723b64e6fbb1dfb4455a" Jul 2 09:49:11.442887 kubelet[1685]: I0702 09:49:11.442870 1685 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54b6bd548a8c93a4790cb3b0b98ddfa5cbe395c57447723b64e6fbb1dfb4455a"} err="failed to get container status \"54b6bd548a8c93a4790cb3b0b98ddfa5cbe395c57447723b64e6fbb1dfb4455a\": rpc error: code = NotFound desc = an error occurred when try to find container \"54b6bd548a8c93a4790cb3b0b98ddfa5cbe395c57447723b64e6fbb1dfb4455a\": not found" Jul 2 09:49:11.442887 kubelet[1685]: I0702 09:49:11.442875 1685 scope.go:117] "RemoveContainer" containerID="8d93bac9ee0e84baa0e7c79eecb8572f4c7f6f457e08827923556145606ed1d4" Jul 2 09:49:11.442990 env[1350]: time="2024-07-02T09:49:11.442966936Z" level=error msg="ContainerStatus for \"8d93bac9ee0e84baa0e7c79eecb8572f4c7f6f457e08827923556145606ed1d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d93bac9ee0e84baa0e7c79eecb8572f4c7f6f457e08827923556145606ed1d4\": not found" Jul 2 09:49:11.443082 kubelet[1685]: E0702 09:49:11.443077 1685 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d93bac9ee0e84baa0e7c79eecb8572f4c7f6f457e08827923556145606ed1d4\": not found" containerID="8d93bac9ee0e84baa0e7c79eecb8572f4c7f6f457e08827923556145606ed1d4" Jul 2 09:49:11.443105 kubelet[1685]: I0702 09:49:11.443090 1685 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d93bac9ee0e84baa0e7c79eecb8572f4c7f6f457e08827923556145606ed1d4"} err="failed to get container status \"8d93bac9ee0e84baa0e7c79eecb8572f4c7f6f457e08827923556145606ed1d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d93bac9ee0e84baa0e7c79eecb8572f4c7f6f457e08827923556145606ed1d4\": not found" Jul 2 09:49:11.443105 kubelet[1685]: I0702 09:49:11.443095 1685 scope.go:117] "RemoveContainer" containerID="7d2bd412686cbb867e68a7d3490dcb3794d906bc7b399e26c99c66a57adcf7c4" Jul 2 09:49:11.443266 env[1350]: time="2024-07-02T09:49:11.443199175Z" level=error msg="ContainerStatus for \"7d2bd412686cbb867e68a7d3490dcb3794d906bc7b399e26c99c66a57adcf7c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d2bd412686cbb867e68a7d3490dcb3794d906bc7b399e26c99c66a57adcf7c4\": not found" Jul 2 09:49:11.443349 kubelet[1685]: E0702 09:49:11.443344 1685 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d2bd412686cbb867e68a7d3490dcb3794d906bc7b399e26c99c66a57adcf7c4\": not found" containerID="7d2bd412686cbb867e68a7d3490dcb3794d906bc7b399e26c99c66a57adcf7c4" Jul 2 09:49:11.443370 kubelet[1685]: I0702 09:49:11.443357 1685 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d2bd412686cbb867e68a7d3490dcb3794d906bc7b399e26c99c66a57adcf7c4"} err="failed to get container status \"7d2bd412686cbb867e68a7d3490dcb3794d906bc7b399e26c99c66a57adcf7c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d2bd412686cbb867e68a7d3490dcb3794d906bc7b399e26c99c66a57adcf7c4\": not found" Jul 2 09:49:11.939350 kubelet[1685]: I0702 09:49:11.939248 1685 topology_manager.go:215] "Topology Admit Handler" podUID="ebdf8795-474e-446b-b7fa-2c89da44124b" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-dbzh6" Jul 2 09:49:11.939673 kubelet[1685]: E0702 09:49:11.939402 1685 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="50759504-e25c-4320-b5cd-8db8960523ae" containerName="mount-cgroup" Jul 2 09:49:11.939673 kubelet[1685]: E0702 09:49:11.939440 1685 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="50759504-e25c-4320-b5cd-8db8960523ae" containerName="apply-sysctl-overwrites" Jul 2 09:49:11.939673 kubelet[1685]: E0702 09:49:11.939461 1685 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="50759504-e25c-4320-b5cd-8db8960523ae" containerName="cilium-agent" Jul 2 09:49:11.939673 kubelet[1685]: E0702 09:49:11.939485 1685 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="50759504-e25c-4320-b5cd-8db8960523ae" containerName="mount-bpf-fs" Jul 2 09:49:11.939673 kubelet[1685]: E0702 09:49:11.939506 1685 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="50759504-e25c-4320-b5cd-8db8960523ae" containerName="clean-cilium-state" Jul 2 09:49:11.939673 kubelet[1685]: I0702 09:49:11.939556 1685 memory_manager.go:346] "RemoveStaleState removing state" podUID="50759504-e25c-4320-b5cd-8db8960523ae" containerName="cilium-agent" Jul 2 09:49:11.942784 kubelet[1685]: I0702 09:49:11.942731 1685 topology_manager.go:215] "Topology Admit Handler" podUID="6469d47c-d900-4aff-b914-16f3ed1c60d7" podNamespace="kube-system" podName="cilium-2mvcl" Jul 2 09:49:12.110849 kubelet[1685]: E0702 09:49:12.110749 1685 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-p697z lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-2mvcl" podUID="6469d47c-d900-4aff-b914-16f3ed1c60d7" Jul 2 09:49:12.114712 kubelet[1685]: I0702 09:49:12.114622 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfmsh\" (UniqueName: \"kubernetes.io/projected/ebdf8795-474e-446b-b7fa-2c89da44124b-kube-api-access-jfmsh\") pod \"cilium-operator-6bc8ccdb58-dbzh6\" (UID: \"ebdf8795-474e-446b-b7fa-2c89da44124b\") " pod="kube-system/cilium-operator-6bc8ccdb58-dbzh6" Jul 2 09:49:12.114950 kubelet[1685]: I0702 09:49:12.114728 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-host-proc-sys-kernel\") pod \"cilium-2mvcl\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " pod="kube-system/cilium-2mvcl" Jul 2 09:49:12.115079 kubelet[1685]: I0702 09:49:12.114944 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/ebdf8795-474e-446b-b7fa-2c89da44124b-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-dbzh6\" (UID: \"ebdf8795-474e-446b-b7fa-2c89da44124b\") " pod="kube-system/cilium-operator-6bc8ccdb58-dbzh6" Jul 2 09:49:12.115221 kubelet[1685]: I0702 09:49:12.115092 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-cni-path\") pod \"cilium-2mvcl\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " pod="kube-system/cilium-2mvcl" Jul 2 09:49:12.115221 kubelet[1685]: I0702 09:49:12.115186 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6469d47c-d900-4aff-b914-16f3ed1c60d7-cilium-config-path\") pod \"cilium-2mvcl\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " pod="kube-system/cilium-2mvcl" Jul 2 09:49:12.115433 kubelet[1685]: I0702 09:49:12.115256 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6469d47c-d900-4aff-b914-16f3ed1c60d7-cilium-ipsec-secrets\") pod \"cilium-2mvcl\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " pod="kube-system/cilium-2mvcl" Jul 2 09:49:12.115433 kubelet[1685]: I0702 09:49:12.115378 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-hostproc\") pod \"cilium-2mvcl\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " pod="kube-system/cilium-2mvcl" Jul 2 09:49:12.115642 kubelet[1685]: I0702 09:49:12.115513 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-etc-cni-netd\") pod \"cilium-2mvcl\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " pod="kube-system/cilium-2mvcl" Jul 2 09:49:12.115642 kubelet[1685]: I0702 09:49:12.115584 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-lib-modules\") pod \"cilium-2mvcl\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " pod="kube-system/cilium-2mvcl" Jul 2 09:49:12.115874 kubelet[1685]: I0702 09:49:12.115646 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-xtables-lock\") pod \"cilium-2mvcl\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " pod="kube-system/cilium-2mvcl" Jul 2 09:49:12.115874 kubelet[1685]: I0702 09:49:12.115776 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6469d47c-d900-4aff-b914-16f3ed1c60d7-clustermesh-secrets\") pod \"cilium-2mvcl\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " pod="kube-system/cilium-2mvcl" Jul 2 09:49:12.116069 kubelet[1685]: I0702 09:49:12.115896 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p697z\" (UniqueName: \"kubernetes.io/projected/6469d47c-d900-4aff-b914-16f3ed1c60d7-kube-api-access-p697z\") pod \"cilium-2mvcl\" (UID: 
\"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " pod="kube-system/cilium-2mvcl" Jul 2 09:49:12.116069 kubelet[1685]: I0702 09:49:12.115968 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-cilium-cgroup\") pod \"cilium-2mvcl\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " pod="kube-system/cilium-2mvcl" Jul 2 09:49:12.116069 kubelet[1685]: I0702 09:49:12.116048 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-host-proc-sys-net\") pod \"cilium-2mvcl\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " pod="kube-system/cilium-2mvcl" Jul 2 09:49:12.116386 kubelet[1685]: I0702 09:49:12.116109 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6469d47c-d900-4aff-b914-16f3ed1c60d7-hubble-tls\") pod \"cilium-2mvcl\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " pod="kube-system/cilium-2mvcl" Jul 2 09:49:12.116386 kubelet[1685]: I0702 09:49:12.116174 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-bpf-maps\") pod \"cilium-2mvcl\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " pod="kube-system/cilium-2mvcl" Jul 2 09:49:12.116386 kubelet[1685]: I0702 09:49:12.116320 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-cilium-run\") pod \"cilium-2mvcl\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " pod="kube-system/cilium-2mvcl" Jul 2 09:49:12.159501 kubelet[1685]: E0702 09:49:12.159395 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:12.245573 env[1350]: time="2024-07-02T09:49:12.245495637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-dbzh6,Uid:ebdf8795-474e-446b-b7fa-2c89da44124b,Namespace:kube-system,Attempt:0,}" Jul 2 09:49:12.251448 env[1350]: time="2024-07-02T09:49:12.251409058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:49:12.251448 env[1350]: time="2024-07-02T09:49:12.251432411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:49:12.251448 env[1350]: time="2024-07-02T09:49:12.251440720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:49:12.251570 env[1350]: time="2024-07-02T09:49:12.251519521Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdbc4ac2c53c508fb3f90ea2b082d4b42e02d7e7f86bfcc66f0dc20aef36fe38 pid=3436 runtime=io.containerd.runc.v2 Jul 2 09:49:12.262140 kubelet[1685]: I0702 09:49:12.262095 1685 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="50759504-e25c-4320-b5cd-8db8960523ae" path="/var/lib/kubelet/pods/50759504-e25c-4320-b5cd-8db8960523ae/volumes" Jul 2 09:49:12.289273 env[1350]: time="2024-07-02T09:49:12.289204966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-dbzh6,Uid:ebdf8795-474e-446b-b7fa-2c89da44124b,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdbc4ac2c53c508fb3f90ea2b082d4b42e02d7e7f86bfcc66f0dc20aef36fe38\"" Jul 2 09:49:12.290160 env[1350]: time="2024-07-02T09:49:12.290106398Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 09:49:12.520034 kubelet[1685]: I0702 09:49:12.519806 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6469d47c-d900-4aff-b914-16f3ed1c60d7-cilium-ipsec-secrets\") pod \"6469d47c-d900-4aff-b914-16f3ed1c60d7\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " Jul 2 09:49:12.520034 kubelet[1685]: I0702 09:49:12.519912 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-bpf-maps\") pod \"6469d47c-d900-4aff-b914-16f3ed1c60d7\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " Jul 2 09:49:12.520034 kubelet[1685]: I0702 09:49:12.519976 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-cilium-run\") pod \"6469d47c-d900-4aff-b914-16f3ed1c60d7\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " Jul 2 09:49:12.520566 kubelet[1685]: I0702 09:49:12.520058 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-host-proc-sys-kernel\") pod \"6469d47c-d900-4aff-b914-16f3ed1c60d7\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " Jul 2 09:49:12.520566 kubelet[1685]: I0702 09:49:12.520059 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6469d47c-d900-4aff-b914-16f3ed1c60d7" (UID: "6469d47c-d900-4aff-b914-16f3ed1c60d7"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:49:12.520566 kubelet[1685]: I0702 09:49:12.520161 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-etc-cni-netd\") pod \"6469d47c-d900-4aff-b914-16f3ed1c60d7\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " Jul 2 09:49:12.520566 kubelet[1685]: I0702 09:49:12.520176 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6469d47c-d900-4aff-b914-16f3ed1c60d7" (UID: "6469d47c-d900-4aff-b914-16f3ed1c60d7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:49:12.520566 kubelet[1685]: I0702 09:49:12.520239 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6469d47c-d900-4aff-b914-16f3ed1c60d7" (UID: "6469d47c-d900-4aff-b914-16f3ed1c60d7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:49:12.521497 kubelet[1685]: I0702 09:49:12.520279 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6469d47c-d900-4aff-b914-16f3ed1c60d7-clustermesh-secrets\") pod \"6469d47c-d900-4aff-b914-16f3ed1c60d7\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " Jul 2 09:49:12.521497 kubelet[1685]: I0702 09:49:12.520304 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6469d47c-d900-4aff-b914-16f3ed1c60d7" (UID: "6469d47c-d900-4aff-b914-16f3ed1c60d7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:49:12.521497 kubelet[1685]: I0702 09:49:12.520411 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-cilium-cgroup\") pod \"6469d47c-d900-4aff-b914-16f3ed1c60d7\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " Jul 2 09:49:12.521497 kubelet[1685]: I0702 09:49:12.520476 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-xtables-lock\") pod \"6469d47c-d900-4aff-b914-16f3ed1c60d7\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " Jul 2 09:49:12.521497 kubelet[1685]: I0702 09:49:12.520493 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6469d47c-d900-4aff-b914-16f3ed1c60d7" (UID: "6469d47c-d900-4aff-b914-16f3ed1c60d7"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:49:12.522158 kubelet[1685]: I0702 09:49:12.520554 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p697z\" (UniqueName: \"kubernetes.io/projected/6469d47c-d900-4aff-b914-16f3ed1c60d7-kube-api-access-p697z\") pod \"6469d47c-d900-4aff-b914-16f3ed1c60d7\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " Jul 2 09:49:12.522158 kubelet[1685]: I0702 09:49:12.520585 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6469d47c-d900-4aff-b914-16f3ed1c60d7" (UID: "6469d47c-d900-4aff-b914-16f3ed1c60d7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:49:12.522158 kubelet[1685]: I0702 09:49:12.520618 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-lib-modules\") pod \"6469d47c-d900-4aff-b914-16f3ed1c60d7\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " Jul 2 09:49:12.522158 kubelet[1685]: I0702 09:49:12.520654 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6469d47c-d900-4aff-b914-16f3ed1c60d7" (UID: "6469d47c-d900-4aff-b914-16f3ed1c60d7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:49:12.522158 kubelet[1685]: I0702 09:49:12.520764 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-cni-path\") pod \"6469d47c-d900-4aff-b914-16f3ed1c60d7\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " Jul 2 09:49:12.522686 kubelet[1685]: I0702 09:49:12.520851 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-cni-path" (OuterVolumeSpecName: "cni-path") pod "6469d47c-d900-4aff-b914-16f3ed1c60d7" (UID: "6469d47c-d900-4aff-b914-16f3ed1c60d7"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:49:12.522686 kubelet[1685]: I0702 09:49:12.520925 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6469d47c-d900-4aff-b914-16f3ed1c60d7-hubble-tls\") pod \"6469d47c-d900-4aff-b914-16f3ed1c60d7\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " Jul 2 09:49:12.522686 kubelet[1685]: I0702 09:49:12.521043 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6469d47c-d900-4aff-b914-16f3ed1c60d7-cilium-config-path\") pod \"6469d47c-d900-4aff-b914-16f3ed1c60d7\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " Jul 2 09:49:12.522686 kubelet[1685]: I0702 09:49:12.521131 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-hostproc\") pod \"6469d47c-d900-4aff-b914-16f3ed1c60d7\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " Jul 2 09:49:12.522686 kubelet[1685]: I0702 09:49:12.521228 1685 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-host-proc-sys-net\") pod \"6469d47c-d900-4aff-b914-16f3ed1c60d7\" (UID: \"6469d47c-d900-4aff-b914-16f3ed1c60d7\") " Jul 2 09:49:12.522686 kubelet[1685]: I0702 09:49:12.521311 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-hostproc" (OuterVolumeSpecName: "hostproc") pod "6469d47c-d900-4aff-b914-16f3ed1c60d7" (UID: "6469d47c-d900-4aff-b914-16f3ed1c60d7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:49:12.523307 kubelet[1685]: I0702 09:49:12.521357 1685 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-cilium-cgroup\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:12.523307 kubelet[1685]: I0702 09:49:12.521335 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6469d47c-d900-4aff-b914-16f3ed1c60d7" (UID: "6469d47c-d900-4aff-b914-16f3ed1c60d7"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:49:12.523307 kubelet[1685]: I0702 09:49:12.521424 1685 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-xtables-lock\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:12.523307 kubelet[1685]: I0702 09:49:12.521459 1685 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-lib-modules\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:12.523307 kubelet[1685]: I0702 09:49:12.521490 1685 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-cni-path\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:12.523307 kubelet[1685]: I0702 09:49:12.521520 1685 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-bpf-maps\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:12.523307 kubelet[1685]: I0702 09:49:12.521557 1685 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-cilium-run\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:12.523307 kubelet[1685]: I0702 09:49:12.521620 1685 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-host-proc-sys-kernel\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:12.524101 kubelet[1685]: I0702 09:49:12.521677 1685 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-etc-cni-netd\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:12.525937 kubelet[1685]: I0702 09:49:12.525848 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6469d47c-d900-4aff-b914-16f3ed1c60d7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6469d47c-d900-4aff-b914-16f3ed1c60d7" (UID: "6469d47c-d900-4aff-b914-16f3ed1c60d7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 09:49:12.526318 kubelet[1685]: I0702 09:49:12.526288 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6469d47c-d900-4aff-b914-16f3ed1c60d7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6469d47c-d900-4aff-b914-16f3ed1c60d7" (UID: "6469d47c-d900-4aff-b914-16f3ed1c60d7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 09:49:12.526318 kubelet[1685]: I0702 09:49:12.526308 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6469d47c-d900-4aff-b914-16f3ed1c60d7-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "6469d47c-d900-4aff-b914-16f3ed1c60d7" (UID: "6469d47c-d900-4aff-b914-16f3ed1c60d7"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 09:49:12.526507 kubelet[1685]: I0702 09:49:12.526466 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6469d47c-d900-4aff-b914-16f3ed1c60d7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6469d47c-d900-4aff-b914-16f3ed1c60d7" (UID: "6469d47c-d900-4aff-b914-16f3ed1c60d7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 09:49:12.526507 kubelet[1685]: I0702 09:49:12.526494 1685 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6469d47c-d900-4aff-b914-16f3ed1c60d7-kube-api-access-p697z" (OuterVolumeSpecName: "kube-api-access-p697z") pod "6469d47c-d900-4aff-b914-16f3ed1c60d7" (UID: "6469d47c-d900-4aff-b914-16f3ed1c60d7"). InnerVolumeSpecName "kube-api-access-p697z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 09:49:12.622101 kubelet[1685]: I0702 09:49:12.621995 1685 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6469d47c-d900-4aff-b914-16f3ed1c60d7-cilium-ipsec-secrets\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:12.622101 kubelet[1685]: I0702 09:49:12.622073 1685 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6469d47c-d900-4aff-b914-16f3ed1c60d7-clustermesh-secrets\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:12.622101 kubelet[1685]: I0702 09:49:12.622111 1685 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p697z\" (UniqueName: \"kubernetes.io/projected/6469d47c-d900-4aff-b914-16f3ed1c60d7-kube-api-access-p697z\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:12.622617 kubelet[1685]: I0702 09:49:12.622142 1685 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6469d47c-d900-4aff-b914-16f3ed1c60d7-hubble-tls\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:12.622617 kubelet[1685]: I0702 09:49:12.622175 1685 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6469d47c-d900-4aff-b914-16f3ed1c60d7-cilium-config-path\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:12.622617 kubelet[1685]: I0702 09:49:12.622205 1685 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-host-proc-sys-net\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:12.622617 kubelet[1685]: I0702 09:49:12.622237 1685 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6469d47c-d900-4aff-b914-16f3ed1c60d7-hostproc\") on node \"10.67.80.19\" DevicePath \"\"" Jul 2 09:49:13.160041 kubelet[1685]: E0702 09:49:13.159958 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:13.180759 kubelet[1685]: E0702 09:49:13.180696 1685 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 09:49:13.230212 systemd[1]: var-lib-kubelet-pods-6469d47c\x2dd900\x2d4aff\x2db914\x2d16f3ed1c60d7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp697z.mount: Deactivated successfully. 
Jul 2 09:49:13.230336 systemd[1]: var-lib-kubelet-pods-6469d47c\x2dd900\x2d4aff\x2db914\x2d16f3ed1c60d7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 09:49:13.230416 systemd[1]: var-lib-kubelet-pods-6469d47c\x2dd900\x2d4aff\x2db914\x2d16f3ed1c60d7-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 09:49:13.230506 systemd[1]: var-lib-kubelet-pods-6469d47c\x2dd900\x2d4aff\x2db914\x2d16f3ed1c60d7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 09:49:13.465828 kubelet[1685]: I0702 09:49:13.465611 1685 topology_manager.go:215] "Topology Admit Handler" podUID="613380cc-c77a-447b-bf04-4fdf17d188e6" podNamespace="kube-system" podName="cilium-kgx56" Jul 2 09:49:13.529015 kubelet[1685]: I0702 09:49:13.528955 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/613380cc-c77a-447b-bf04-4fdf17d188e6-bpf-maps\") pod \"cilium-kgx56\" (UID: \"613380cc-c77a-447b-bf04-4fdf17d188e6\") " pod="kube-system/cilium-kgx56" Jul 2 09:49:13.529349 kubelet[1685]: I0702 09:49:13.529060 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/613380cc-c77a-447b-bf04-4fdf17d188e6-etc-cni-netd\") pod \"cilium-kgx56\" (UID: \"613380cc-c77a-447b-bf04-4fdf17d188e6\") " pod="kube-system/cilium-kgx56" Jul 2 09:49:13.529349 kubelet[1685]: I0702 09:49:13.529255 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/613380cc-c77a-447b-bf04-4fdf17d188e6-cilium-run\") pod \"cilium-kgx56\" (UID: \"613380cc-c77a-447b-bf04-4fdf17d188e6\") " pod="kube-system/cilium-kgx56" Jul 2 09:49:13.529657 kubelet[1685]: I0702 09:49:13.529353 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/613380cc-c77a-447b-bf04-4fdf17d188e6-hostproc\") pod \"cilium-kgx56\" (UID: \"613380cc-c77a-447b-bf04-4fdf17d188e6\") " pod="kube-system/cilium-kgx56" Jul 2 09:49:13.529657 kubelet[1685]: I0702 09:49:13.529436 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/613380cc-c77a-447b-bf04-4fdf17d188e6-cilium-cgroup\") pod \"cilium-kgx56\" (UID: \"613380cc-c77a-447b-bf04-4fdf17d188e6\") " pod="kube-system/cilium-kgx56" Jul 2 09:49:13.529657 kubelet[1685]: I0702 09:49:13.529501 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/613380cc-c77a-447b-bf04-4fdf17d188e6-xtables-lock\") pod \"cilium-kgx56\" (UID: \"613380cc-c77a-447b-bf04-4fdf17d188e6\") " pod="kube-system/cilium-kgx56" Jul 2 09:49:13.529657 kubelet[1685]: I0702 09:49:13.529629 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/613380cc-c77a-447b-bf04-4fdf17d188e6-cilium-ipsec-secrets\") pod \"cilium-kgx56\" (UID: \"613380cc-c77a-447b-bf04-4fdf17d188e6\") " pod="kube-system/cilium-kgx56" Jul 2 09:49:13.530200 kubelet[1685]: I0702 09:49:13.529757 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/613380cc-c77a-447b-bf04-4fdf17d188e6-cni-path\") pod \"cilium-kgx56\" (UID: \"613380cc-c77a-447b-bf04-4fdf17d188e6\") " pod="kube-system/cilium-kgx56" Jul 2 09:49:13.530200 kubelet[1685]: I0702 09:49:13.529849 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/613380cc-c77a-447b-bf04-4fdf17d188e6-cilium-config-path\") pod \"cilium-kgx56\" (UID: \"613380cc-c77a-447b-bf04-4fdf17d188e6\") " pod="kube-system/cilium-kgx56" Jul 2 09:49:13.530200 kubelet[1685]: I0702 09:49:13.529999 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkfwh\" (UniqueName: \"kubernetes.io/projected/613380cc-c77a-447b-bf04-4fdf17d188e6-kube-api-access-mkfwh\") pod \"cilium-kgx56\" (UID: \"613380cc-c77a-447b-bf04-4fdf17d188e6\") " pod="kube-system/cilium-kgx56" Jul 2 09:49:13.530200 kubelet[1685]: I0702 09:49:13.530138 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/613380cc-c77a-447b-bf04-4fdf17d188e6-host-proc-sys-net\") pod \"cilium-kgx56\" (UID: \"613380cc-c77a-447b-bf04-4fdf17d188e6\") " pod="kube-system/cilium-kgx56" Jul 2 09:49:13.530684 kubelet[1685]: I0702 09:49:13.530329 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/613380cc-c77a-447b-bf04-4fdf17d188e6-lib-modules\") pod \"cilium-kgx56\" (UID: \"613380cc-c77a-447b-bf04-4fdf17d188e6\") " pod="kube-system/cilium-kgx56" Jul 2 09:49:13.530684 kubelet[1685]: I0702 09:49:13.530430 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/613380cc-c77a-447b-bf04-4fdf17d188e6-host-proc-sys-kernel\") pod \"cilium-kgx56\" (UID: \"613380cc-c77a-447b-bf04-4fdf17d188e6\") " pod="kube-system/cilium-kgx56" Jul 2 09:49:13.530684 kubelet[1685]: I0702 09:49:13.530494 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/613380cc-c77a-447b-bf04-4fdf17d188e6-hubble-tls\") pod \"cilium-kgx56\" (UID: \"613380cc-c77a-447b-bf04-4fdf17d188e6\") " pod="kube-system/cilium-kgx56" Jul 2 09:49:13.530684 kubelet[1685]: I0702 09:49:13.530559 1685 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/613380cc-c77a-447b-bf04-4fdf17d188e6-clustermesh-secrets\") pod \"cilium-kgx56\" (UID: \"613380cc-c77a-447b-bf04-4fdf17d188e6\") " pod="kube-system/cilium-kgx56" Jul 2 09:49:13.743841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1006933510.mount: Deactivated successfully. Jul 2 09:49:13.771212 env[1350]: time="2024-07-02T09:49:13.771189974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kgx56,Uid:613380cc-c77a-447b-bf04-4fdf17d188e6,Namespace:kube-system,Attempt:0,}" Jul 2 09:49:13.776352 env[1350]: time="2024-07-02T09:49:13.776291003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:49:13.776352 env[1350]: time="2024-07-02T09:49:13.776313055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:49:13.776352 env[1350]: time="2024-07-02T09:49:13.776320308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:49:13.776476 env[1350]: time="2024-07-02T09:49:13.776402947Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b83d363951194f2bd812cfb15bbcbfcb7a51a215c281e9b51ea9a609552b465d pid=3484 runtime=io.containerd.runc.v2 Jul 2 09:49:13.792639 env[1350]: time="2024-07-02T09:49:13.792582544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kgx56,Uid:613380cc-c77a-447b-bf04-4fdf17d188e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b83d363951194f2bd812cfb15bbcbfcb7a51a215c281e9b51ea9a609552b465d\"" Jul 2 09:49:13.793884 env[1350]: time="2024-07-02T09:49:13.793842563Z" level=info msg="CreateContainer within sandbox \"b83d363951194f2bd812cfb15bbcbfcb7a51a215c281e9b51ea9a609552b465d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 09:49:13.797724 env[1350]: time="2024-07-02T09:49:13.797683966Z" level=info msg="CreateContainer within sandbox \"b83d363951194f2bd812cfb15bbcbfcb7a51a215c281e9b51ea9a609552b465d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a3db1d6256dcbe71d03f94064816c182514984b4e9349ff53c14d23b3bff682c\"" Jul 2 09:49:13.798067 env[1350]: time="2024-07-02T09:49:13.798008738Z" level=info msg="StartContainer for \"a3db1d6256dcbe71d03f94064816c182514984b4e9349ff53c14d23b3bff682c\"" Jul 2 09:49:13.819374 env[1350]: time="2024-07-02T09:49:13.819322777Z" level=info msg="StartContainer for \"a3db1d6256dcbe71d03f94064816c182514984b4e9349ff53c14d23b3bff682c\" returns successfully" Jul 2 09:49:13.885794 env[1350]: time="2024-07-02T09:49:13.885763294Z" level=info msg="shim disconnected" id=a3db1d6256dcbe71d03f94064816c182514984b4e9349ff53c14d23b3bff682c Jul 2 09:49:13.885794 env[1350]: time="2024-07-02T09:49:13.885795171Z" level=warning msg="cleaning up after shim disconnected" id=a3db1d6256dcbe71d03f94064816c182514984b4e9349ff53c14d23b3bff682c namespace=k8s.io Jul 2 09:49:13.885923 env[1350]: time="2024-07-02T09:49:13.885803151Z" level=info msg="cleaning up dead shim" Jul 2 09:49:13.889438 env[1350]: time="2024-07-02T09:49:13.889393002Z" level=warning msg="cleanup warnings time=\"2024-07-02T09:49:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3568 runtime=io.containerd.runc.v2\n" Jul 2 09:49:14.151293 env[1350]: time="2024-07-02T09:49:14.151270589Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:49:14.151918 env[1350]: time="2024-07-02T09:49:14.151876756Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:49:14.152554 env[1350]: time="2024-07-02T09:49:14.152508470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 09:49:14.152865 env[1350]: time="2024-07-02T09:49:14.152813146Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 09:49:14.154109 env[1350]: time="2024-07-02T09:49:14.154065371Z" level=info msg="CreateContainer within sandbox \"fdbc4ac2c53c508fb3f90ea2b082d4b42e02d7e7f86bfcc66f0dc20aef36fe38\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 09:49:14.157578 env[1350]: time="2024-07-02T09:49:14.157535900Z" level=info msg="CreateContainer within sandbox \"fdbc4ac2c53c508fb3f90ea2b082d4b42e02d7e7f86bfcc66f0dc20aef36fe38\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e124c79388d1cf8688d4a2293bf8376b211700579de6c84b310df64f125f0700\"" Jul 2 09:49:14.157890 env[1350]: time="2024-07-02T09:49:14.157833934Z" level=info msg="StartContainer for \"e124c79388d1cf8688d4a2293bf8376b211700579de6c84b310df64f125f0700\"" Jul 2 09:49:14.160282 kubelet[1685]: E0702 09:49:14.160245 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:14.177689 env[1350]: time="2024-07-02T09:49:14.177663402Z" level=info msg="StartContainer for \"e124c79388d1cf8688d4a2293bf8376b211700579de6c84b310df64f125f0700\" returns successfully" Jul 2 09:49:14.262133 kubelet[1685]: I0702 09:49:14.262118 1685 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6469d47c-d900-4aff-b914-16f3ed1c60d7" path="/var/lib/kubelet/pods/6469d47c-d900-4aff-b914-16f3ed1c60d7/volumes" Jul 2 09:49:14.450492 env[1350]: time="2024-07-02T09:49:14.450251552Z" level=info msg="CreateContainer within sandbox \"b83d363951194f2bd812cfb15bbcbfcb7a51a215c281e9b51ea9a609552b465d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 09:49:14.463133 kubelet[1685]: I0702 09:49:14.462250 1685 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-dbzh6" podStartSLOduration=1.5990002140000001 podCreationTimestamp="2024-07-02 09:49:11 +0000 UTC" firstStartedPulling="2024-07-02 09:49:12.289894862 +0000 UTC m=+54.379706056" lastFinishedPulling="2024-07-02 09:49:14.152975749 +0000 UTC m=+56.242786942" observedRunningTime="2024-07-02 09:49:14.460861904 +0000 UTC m=+56.550673194" watchObservedRunningTime="2024-07-02 09:49:14.4620811 +0000 UTC m=+56.551892345" Jul 2 09:49:14.465201 env[1350]: time="2024-07-02T09:49:14.465181945Z" level=info msg="CreateContainer within sandbox \"b83d363951194f2bd812cfb15bbcbfcb7a51a215c281e9b51ea9a609552b465d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"351525a62d3ca8aaaa867957ff7c15e2493ac162cc62434bb4a7e685efe0917d\"" Jul 2 09:49:14.465533 env[1350]: time="2024-07-02T09:49:14.465511515Z" level=info msg="StartContainer for \"351525a62d3ca8aaaa867957ff7c15e2493ac162cc62434bb4a7e685efe0917d\"" Jul 2 09:49:14.486699 env[1350]: time="2024-07-02T09:49:14.486650257Z" level=info msg="StartContainer for \"351525a62d3ca8aaaa867957ff7c15e2493ac162cc62434bb4a7e685efe0917d\" returns successfully" Jul 2 09:49:14.599539 env[1350]: time="2024-07-02T09:49:14.599428961Z" level=info msg="shim disconnected" id=351525a62d3ca8aaaa867957ff7c15e2493ac162cc62434bb4a7e685efe0917d Jul 2 09:49:14.599953 env[1350]: time="2024-07-02T09:49:14.599550212Z" level=warning msg="cleaning up after shim disconnected" id=351525a62d3ca8aaaa867957ff7c15e2493ac162cc62434bb4a7e685efe0917d 
namespace=k8s.io Jul 2 09:49:14.599953 env[1350]: time="2024-07-02T09:49:14.599584053Z" level=info msg="cleaning up dead shim" Jul 2 09:49:14.616421 env[1350]: time="2024-07-02T09:49:14.616307405Z" level=warning msg="cleanup warnings time=\"2024-07-02T09:49:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3676 runtime=io.containerd.runc.v2\n" Jul 2 09:49:15.161248 kubelet[1685]: E0702 09:49:15.161136 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:15.229474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-351525a62d3ca8aaaa867957ff7c15e2493ac162cc62434bb4a7e685efe0917d-rootfs.mount: Deactivated successfully. Jul 2 09:49:15.458256 env[1350]: time="2024-07-02T09:49:15.458018671Z" level=info msg="CreateContainer within sandbox \"b83d363951194f2bd812cfb15bbcbfcb7a51a215c281e9b51ea9a609552b465d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 09:49:15.468895 env[1350]: time="2024-07-02T09:49:15.468880219Z" level=info msg="CreateContainer within sandbox \"b83d363951194f2bd812cfb15bbcbfcb7a51a215c281e9b51ea9a609552b465d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d69b1c0399cf20076d4ae677a5d050e86608fd1ecab8186d4cf82234a08ee8af\"" Jul 2 09:49:15.469162 env[1350]: time="2024-07-02T09:49:15.469149087Z" level=info msg="StartContainer for \"d69b1c0399cf20076d4ae677a5d050e86608fd1ecab8186d4cf82234a08ee8af\"" Jul 2 09:49:15.493322 env[1350]: time="2024-07-02T09:49:15.493269889Z" level=info msg="StartContainer for \"d69b1c0399cf20076d4ae677a5d050e86608fd1ecab8186d4cf82234a08ee8af\" returns successfully" Jul 2 09:49:15.504347 env[1350]: time="2024-07-02T09:49:15.504316396Z" level=info msg="shim disconnected" id=d69b1c0399cf20076d4ae677a5d050e86608fd1ecab8186d4cf82234a08ee8af Jul 2 09:49:15.504347 env[1350]: time="2024-07-02T09:49:15.504347600Z" level=warning msg="cleaning up after shim disconnected" id=d69b1c0399cf20076d4ae677a5d050e86608fd1ecab8186d4cf82234a08ee8af namespace=k8s.io Jul 2 09:49:15.504471 env[1350]: time="2024-07-02T09:49:15.504353727Z" level=info msg="cleaning up dead shim" Jul 2 09:49:15.507937 env[1350]: time="2024-07-02T09:49:15.507920026Z" level=warning msg="cleanup warnings time=\"2024-07-02T09:49:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3732 runtime=io.containerd.runc.v2\n" Jul 2 09:49:16.161720 kubelet[1685]: E0702 09:49:16.161594 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:16.229396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d69b1c0399cf20076d4ae677a5d050e86608fd1ecab8186d4cf82234a08ee8af-rootfs.mount: Deactivated successfully. 
Jul 2 09:49:16.467500 env[1350]: time="2024-07-02T09:49:16.467302549Z" level=info msg="CreateContainer within sandbox \"b83d363951194f2bd812cfb15bbcbfcb7a51a215c281e9b51ea9a609552b465d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 09:49:16.475984 env[1350]: time="2024-07-02T09:49:16.475964607Z" level=info msg="CreateContainer within sandbox \"b83d363951194f2bd812cfb15bbcbfcb7a51a215c281e9b51ea9a609552b465d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9d00215dcdfbba64d474e7e52fb21133ab3e422190ac342286b2767f938ec7a0\"" Jul 2 09:49:16.476187 env[1350]: time="2024-07-02T09:49:16.476173057Z" level=info msg="StartContainer for \"9d00215dcdfbba64d474e7e52fb21133ab3e422190ac342286b2767f938ec7a0\"" Jul 2 09:49:16.477713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2632923979.mount: Deactivated successfully. Jul 2 09:49:16.496912 env[1350]: time="2024-07-02T09:49:16.496889467Z" level=info msg="StartContainer for \"9d00215dcdfbba64d474e7e52fb21133ab3e422190ac342286b2767f938ec7a0\" returns successfully" Jul 2 09:49:16.505722 env[1350]: time="2024-07-02T09:49:16.505695247Z" level=info msg="shim disconnected" id=9d00215dcdfbba64d474e7e52fb21133ab3e422190ac342286b2767f938ec7a0 Jul 2 09:49:16.505722 env[1350]: time="2024-07-02T09:49:16.505722014Z" level=warning msg="cleaning up after shim disconnected" id=9d00215dcdfbba64d474e7e52fb21133ab3e422190ac342286b2767f938ec7a0 namespace=k8s.io Jul 2 09:49:16.505902 env[1350]: time="2024-07-02T09:49:16.505727900Z" level=info msg="cleaning up dead shim" Jul 2 09:49:16.509163 env[1350]: time="2024-07-02T09:49:16.509131019Z" level=warning msg="cleanup warnings time=\"2024-07-02T09:49:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3786 runtime=io.containerd.runc.v2\n" Jul 2 09:49:17.162440 kubelet[1685]: E0702 09:49:17.162327 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:17.230026 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d00215dcdfbba64d474e7e52fb21133ab3e422190ac342286b2767f938ec7a0-rootfs.mount: Deactivated successfully. Jul 2 09:49:17.477632 env[1350]: time="2024-07-02T09:49:17.477393110Z" level=info msg="CreateContainer within sandbox \"b83d363951194f2bd812cfb15bbcbfcb7a51a215c281e9b51ea9a609552b465d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 09:49:17.508773 env[1350]: time="2024-07-02T09:49:17.508685935Z" level=info msg="CreateContainer within sandbox \"b83d363951194f2bd812cfb15bbcbfcb7a51a215c281e9b51ea9a609552b465d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"defc5d6f2576159f6b44f888f6ab2d393baf7602a1aa35ac37f65254cae4cbb0\"" Jul 2 09:49:17.509642 env[1350]: time="2024-07-02T09:49:17.509553910Z" level=info msg="StartContainer for \"defc5d6f2576159f6b44f888f6ab2d393baf7602a1aa35ac37f65254cae4cbb0\"" Jul 2 09:49:17.517777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount846825351.mount: Deactivated successfully. 
Jul 2 09:49:17.538210 env[1350]: time="2024-07-02T09:49:17.538174772Z" level=info msg="StartContainer for \"defc5d6f2576159f6b44f888f6ab2d393baf7602a1aa35ac37f65254cae4cbb0\" returns successfully" Jul 2 09:49:17.679799 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 2 09:49:18.124207 kubelet[1685]: E0702 09:49:18.124088 1685 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:18.141084 env[1350]: time="2024-07-02T09:49:18.140964404Z" level=info msg="StopPodSandbox for \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\"" Jul 2 09:49:18.141367 env[1350]: time="2024-07-02T09:49:18.141195297Z" level=info msg="TearDown network for sandbox \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\" successfully" Jul 2 09:49:18.141367 env[1350]: time="2024-07-02T09:49:18.141288556Z" level=info msg="StopPodSandbox for \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\" returns successfully" Jul 2 09:49:18.142315 env[1350]: time="2024-07-02T09:49:18.142247381Z" level=info msg="RemovePodSandbox for \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\"" Jul 2 09:49:18.142455 env[1350]: time="2024-07-02T09:49:18.142325038Z" level=info msg="Forcibly stopping sandbox \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\"" Jul 2 09:49:18.142574 env[1350]: time="2024-07-02T09:49:18.142509404Z" level=info msg="TearDown network for sandbox \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\" successfully" Jul 2 09:49:18.148616 env[1350]: time="2024-07-02T09:49:18.148543787Z" level=info msg="RemovePodSandbox \"575fad61d818e865a6570c653ad616714be002ad289f365d33111bec73dd193d\" returns successfully" Jul 2 09:49:18.162833 kubelet[1685]: E0702 09:49:18.162733 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:18.494460 kubelet[1685]: I0702 09:49:18.494252 1685 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-kgx56" podStartSLOduration=5.494160681 podCreationTimestamp="2024-07-02 09:49:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:49:18.493856843 +0000 UTC m=+60.583668098" watchObservedRunningTime="2024-07-02 09:49:18.494160681 +0000 UTC m=+60.583971915" Jul 2 09:49:19.163828 kubelet[1685]: E0702 09:49:19.163777 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:20.164719 kubelet[1685]: E0702 09:49:20.164607 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:20.526646 systemd-networkd[1101]: lxc_health: Link UP Jul 2 09:49:20.550102 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 09:49:20.549776 systemd-networkd[1101]: lxc_health: Gained carrier Jul 2 09:49:21.165701 kubelet[1685]: E0702 09:49:21.165643 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:21.624928 systemd-networkd[1101]: lxc_health: Gained IPv6LL Jul 2 09:49:22.165852 kubelet[1685]: E0702 09:49:22.165802 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:23.167177 kubelet[1685]: 
E0702 09:49:23.167059 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:24.167442 kubelet[1685]: E0702 09:49:24.167329 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:25.167656 kubelet[1685]: E0702 09:49:25.167539 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:26.168941 kubelet[1685]: E0702 09:49:26.168833 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:27.170104 kubelet[1685]: E0702 09:49:27.169996 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 09:49:28.171094 kubelet[1685]: E0702 09:49:28.170971 1685 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"