Oct 2 20:20:08.567925 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 20:20:08.567938 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 20:20:08.567945 kernel: BIOS-provided physical RAM map: Oct 2 20:20:08.567949 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Oct 2 20:20:08.567952 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Oct 2 20:20:08.567956 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Oct 2 20:20:08.567961 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Oct 2 20:20:08.567965 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Oct 2 20:20:08.567968 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000825dcfff] usable Oct 2 20:20:08.567972 kernel: BIOS-e820: [mem 0x00000000825dd000-0x00000000825ddfff] ACPI NVS Oct 2 20:20:08.567977 kernel: BIOS-e820: [mem 0x00000000825de000-0x00000000825defff] reserved Oct 2 20:20:08.567981 kernel: BIOS-e820: [mem 0x00000000825df000-0x000000008afccfff] usable Oct 2 20:20:08.567985 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Oct 2 20:20:08.567988 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Oct 2 20:20:08.567993 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Oct 2 20:20:08.567998 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Oct 2 20:20:08.568003 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Oct 2 20:20:08.568007 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Oct 2 20:20:08.568011 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Oct 2 20:20:08.568015 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Oct 2 20:20:08.568019 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Oct 2 20:20:08.568023 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Oct 2 20:20:08.568027 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Oct 2 20:20:08.568032 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Oct 2 20:20:08.568036 kernel: NX (Execute Disable) protection: active Oct 2 20:20:08.568040 kernel: SMBIOS 3.2.1 present. 
Oct 2 20:20:08.568045 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 Oct 2 20:20:08.568049 kernel: tsc: Detected 3400.000 MHz processor Oct 2 20:20:08.568053 kernel: tsc: Detected 3399.906 MHz TSC Oct 2 20:20:08.568058 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 20:20:08.568062 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 20:20:08.568067 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Oct 2 20:20:08.568071 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 20:20:08.568075 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Oct 2 20:20:08.568080 kernel: Using GB pages for direct mapping Oct 2 20:20:08.568084 kernel: ACPI: Early table checksum verification disabled Oct 2 20:20:08.568089 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Oct 2 20:20:08.568093 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Oct 2 20:20:08.568098 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Oct 2 20:20:08.568102 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Oct 2 20:20:08.568108 kernel: ACPI: FACS 0x000000008C66CF80 000040 Oct 2 20:20:08.568113 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Oct 2 20:20:08.568118 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Oct 2 20:20:08.568123 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Oct 2 20:20:08.568128 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Oct 2 20:20:08.568132 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Oct 2 20:20:08.568137 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Oct 2 20:20:08.568142 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Oct 2 20:20:08.568147 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Oct 2 20:20:08.568151 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Oct 2 20:20:08.568156 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Oct 2 20:20:08.568161 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Oct 2 20:20:08.568166 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Oct 2 20:20:08.568171 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Oct 2 20:20:08.568175 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Oct 2 20:20:08.568180 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Oct 2 20:20:08.568184 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Oct 2 20:20:08.568189 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Oct 2 20:20:08.568195 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Oct 2 20:20:08.568199 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Oct 2 20:20:08.568204 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Oct 2 20:20:08.568209 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Oct 2 20:20:08.568213 kernel: ACPI: SSDT 
0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Oct 2 20:20:08.568218 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Oct 2 20:20:08.568222 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Oct 2 20:20:08.568227 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Oct 2 20:20:08.568232 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Oct 2 20:20:08.568237 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000) Oct 2 20:20:08.568242 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Oct 2 20:20:08.568246 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Oct 2 20:20:08.568251 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Oct 2 20:20:08.568256 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Oct 2 20:20:08.568260 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Oct 2 20:20:08.568265 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Oct 2 20:20:08.568270 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Oct 2 20:20:08.568274 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Oct 2 20:20:08.568280 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Oct 2 20:20:08.568284 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Oct 2 20:20:08.568289 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Oct 2 20:20:08.568293 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Oct 2 20:20:08.568298 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Oct 2 20:20:08.568303 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Oct 2 20:20:08.568307 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Oct 2 20:20:08.568312 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Oct 2 20:20:08.568317 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Oct 2 20:20:08.568322 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Oct 2 20:20:08.568327 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Oct 2 20:20:08.568331 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Oct 2 20:20:08.568336 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Oct 2 20:20:08.568340 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Oct 2 20:20:08.568345 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Oct 2 20:20:08.568350 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Oct 2 20:20:08.568354 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Oct 2 20:20:08.568359 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Oct 2 20:20:08.568364 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Oct 2 20:20:08.568369 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Oct 2 20:20:08.568374 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Oct 2 20:20:08.568378 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Oct 2 20:20:08.568383 kernel: ACPI: Reserving HEST table memory at [mem 
0x8c599ff8-0x8c59a273] Oct 2 20:20:08.568387 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Oct 2 20:20:08.568392 kernel: No NUMA configuration found Oct 2 20:20:08.568397 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Oct 2 20:20:08.568404 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Oct 2 20:20:08.568409 kernel: Zone ranges: Oct 2 20:20:08.568414 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 20:20:08.568419 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Oct 2 20:20:08.568423 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Oct 2 20:20:08.568428 kernel: Movable zone start for each node Oct 2 20:20:08.568433 kernel: Early memory node ranges Oct 2 20:20:08.568437 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Oct 2 20:20:08.568442 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Oct 2 20:20:08.568447 kernel: node 0: [mem 0x0000000040400000-0x00000000825dcfff] Oct 2 20:20:08.568452 kernel: node 0: [mem 0x00000000825df000-0x000000008afccfff] Oct 2 20:20:08.568457 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Oct 2 20:20:08.568461 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Oct 2 20:20:08.568466 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Oct 2 20:20:08.568471 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Oct 2 20:20:08.568476 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 20:20:08.568483 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Oct 2 20:20:08.568489 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Oct 2 20:20:08.568494 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Oct 2 20:20:08.568499 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Oct 2 20:20:08.568505 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Oct 2 20:20:08.568510 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Oct 2 20:20:08.568515 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Oct 2 20:20:08.568520 kernel: ACPI: PM-Timer IO Port: 0x1808 Oct 2 20:20:08.568525 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Oct 2 20:20:08.568530 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Oct 2 20:20:08.568535 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Oct 2 20:20:08.568541 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Oct 2 20:20:08.568546 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Oct 2 20:20:08.568551 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Oct 2 20:20:08.568556 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Oct 2 20:20:08.568560 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Oct 2 20:20:08.568566 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Oct 2 20:20:08.568571 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Oct 2 20:20:08.568575 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Oct 2 20:20:08.568580 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Oct 2 20:20:08.568586 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Oct 2 20:20:08.568591 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Oct 2 20:20:08.568596 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Oct 2 20:20:08.568601 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Oct 2 20:20:08.568606 kernel: IOAPIC[0]: apic_id 2, version 32, address 
0xfec00000, GSI 0-119 Oct 2 20:20:08.568611 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 2 20:20:08.568616 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 2 20:20:08.568621 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 20:20:08.568626 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 2 20:20:08.568632 kernel: TSC deadline timer available Oct 2 20:20:08.568637 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Oct 2 20:20:08.568642 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Oct 2 20:20:08.568647 kernel: Booting paravirtualized kernel on bare hardware Oct 2 20:20:08.568652 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 20:20:08.568657 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Oct 2 20:20:08.568662 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144 Oct 2 20:20:08.568667 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152 Oct 2 20:20:08.568672 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Oct 2 20:20:08.568677 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Oct 2 20:20:08.568682 kernel: Policy zone: Normal Oct 2 20:20:08.568688 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 20:20:08.568693 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 20:20:08.568698 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Oct 2 20:20:08.568703 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Oct 2 20:20:08.568708 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 20:20:08.568713 kernel: Memory: 32724720K/33452980K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 728000K reserved, 0K cma-reserved) Oct 2 20:20:08.568719 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Oct 2 20:20:08.568724 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 20:20:08.568729 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 20:20:08.568734 kernel: rcu: Hierarchical RCU implementation. Oct 2 20:20:08.568740 kernel: rcu: RCU event tracing is enabled. Oct 2 20:20:08.568745 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Oct 2 20:20:08.568750 kernel: Rude variant of Tasks RCU enabled. Oct 2 20:20:08.568755 kernel: Tracing variant of Tasks RCU enabled. Oct 2 20:20:08.568760 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 2 20:20:08.568766 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Oct 2 20:20:08.568771 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Oct 2 20:20:08.568776 kernel: random: crng init done Oct 2 20:20:08.568781 kernel: Console: colour dummy device 80x25 Oct 2 20:20:08.568786 kernel: printk: console [tty0] enabled Oct 2 20:20:08.568791 kernel: printk: console [ttyS1] enabled Oct 2 20:20:08.568796 kernel: ACPI: Core revision 20210730 Oct 2 20:20:08.568801 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Oct 2 20:20:08.568805 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 20:20:08.568811 kernel: DMAR: Host address width 39 Oct 2 20:20:08.568816 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Oct 2 20:20:08.568821 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Oct 2 20:20:08.568826 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Oct 2 20:20:08.568831 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Oct 2 20:20:08.568836 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Oct 2 20:20:08.568841 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Oct 2 20:20:08.568846 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Oct 2 20:20:08.568851 kernel: x2apic enabled Oct 2 20:20:08.568857 kernel: Switched APIC routing to cluster x2apic. Oct 2 20:20:08.568862 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Oct 2 20:20:08.568867 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Oct 2 20:20:08.568872 kernel: CPU0: Thermal monitoring enabled (TM1) Oct 2 20:20:08.568877 kernel: process: using mwait in idle threads Oct 2 20:20:08.568882 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Oct 2 20:20:08.568887 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Oct 2 20:20:08.568892 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 20:20:08.568897 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Oct 2 20:20:08.568902 kernel: Spectre V2 : Mitigation: Enhanced IBRS Oct 2 20:20:08.568907 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 20:20:08.568912 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Oct 2 20:20:08.568917 kernel: RETBleed: Mitigation: Enhanced IBRS Oct 2 20:20:08.568922 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 2 20:20:08.568927 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Oct 2 20:20:08.568932 kernel: TAA: Mitigation: TSX disabled Oct 2 20:20:08.568937 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Oct 2 20:20:08.568942 kernel: SRBDS: Mitigation: Microcode Oct 2 20:20:08.568947 kernel: GDS: Vulnerable: No microcode Oct 2 20:20:08.568952 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 2 20:20:08.568958 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 2 20:20:08.568963 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 2 20:20:08.568968 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Oct 2 20:20:08.568973 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Oct 2 20:20:08.568978 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 2 20:20:08.568982 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Oct 2 20:20:08.568987 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Oct 2 20:20:08.568992 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Oct 2 20:20:08.568997 kernel: Freeing SMP alternatives memory: 32K Oct 2 20:20:08.569002 kernel: pid_max: default: 32768 minimum: 301 Oct 2 20:20:08.569007 kernel: LSM: Security Framework initializing Oct 2 20:20:08.569012 kernel: SELinux: Initializing. Oct 2 20:20:08.569017 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 20:20:08.569022 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 20:20:08.569027 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Oct 2 20:20:08.569032 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Oct 2 20:20:08.569037 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Oct 2 20:20:08.569042 kernel: ... version: 4 Oct 2 20:20:08.569047 kernel: ... bit width: 48 Oct 2 20:20:08.569052 kernel: ... generic registers: 4 Oct 2 20:20:08.569057 kernel: ... value mask: 0000ffffffffffff Oct 2 20:20:08.569062 kernel: ... max period: 00007fffffffffff Oct 2 20:20:08.569068 kernel: ... fixed-purpose events: 3 Oct 2 20:20:08.569073 kernel: ... event mask: 000000070000000f Oct 2 20:20:08.569077 kernel: signal: max sigframe size: 2032 Oct 2 20:20:08.569082 kernel: rcu: Hierarchical SRCU implementation. Oct 2 20:20:08.569087 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Oct 2 20:20:08.569092 kernel: smp: Bringing up secondary CPUs ... Oct 2 20:20:08.569097 kernel: x86: Booting SMP configuration: Oct 2 20:20:08.569102 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 Oct 2 20:20:08.569108 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Oct 2 20:20:08.569114 kernel: #9 #10 #11 #12 #13 #14 #15 Oct 2 20:20:08.569118 kernel: smp: Brought up 1 node, 16 CPUs Oct 2 20:20:08.569123 kernel: smpboot: Max logical packages: 1 Oct 2 20:20:08.569128 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Oct 2 20:20:08.569133 kernel: devtmpfs: initialized Oct 2 20:20:08.569138 kernel: x86/mm: Memory block size: 128MB Oct 2 20:20:08.569143 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x825dd000-0x825ddfff] (4096 bytes) Oct 2 20:20:08.569148 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Oct 2 20:20:08.569153 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 20:20:08.569159 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Oct 2 20:20:08.569164 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 20:20:08.569169 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 20:20:08.569174 kernel: audit: initializing netlink subsys (disabled) Oct 2 20:20:08.569179 kernel: audit: type=2000 audit(1696278003.040:1): state=initialized audit_enabled=0 res=1 Oct 2 20:20:08.569184 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 20:20:08.569189 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 20:20:08.569194 kernel: cpuidle: using governor menu Oct 2 20:20:08.569200 kernel: ACPI: bus type PCI registered Oct 2 20:20:08.569204 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 20:20:08.569209 kernel: dca service started, version 1.12.1 Oct 2 20:20:08.569214 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Oct 2 20:20:08.569219 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820 Oct 2 20:20:08.569224 kernel: PCI: Using configuration type 1 for base access Oct 2 20:20:08.569229 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Oct 2 20:20:08.569234 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 2 20:20:08.569239 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 20:20:08.569245 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 20:20:08.569250 kernel: ACPI: Added _OSI(Module Device) Oct 2 20:20:08.569255 kernel: ACPI: Added _OSI(Processor Device) Oct 2 20:20:08.569260 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 20:20:08.569265 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 20:20:08.569270 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 20:20:08.569275 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 20:20:08.569280 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 20:20:08.569285 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Oct 2 20:20:08.569290 kernel: ACPI: Dynamic OEM Table Load: Oct 2 20:20:08.569295 kernel: ACPI: SSDT 0xFFFF92A60020EB00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Oct 2 20:20:08.569301 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked Oct 2 20:20:08.569306 kernel: ACPI: Dynamic OEM Table Load: Oct 2 20:20:08.569311 kernel: ACPI: SSDT 0xFFFF92A601AE6000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Oct 2 20:20:08.569315 kernel: ACPI: Dynamic OEM Table Load: Oct 2 20:20:08.569320 kernel: ACPI: SSDT 0xFFFF92A601A51000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Oct 2 20:20:08.569325 kernel: ACPI: Dynamic OEM Table Load: Oct 2 20:20:08.569330 kernel: ACPI: SSDT 0xFFFF92A601A56800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Oct 2 20:20:08.569335 kernel: ACPI: Dynamic OEM Table Load: Oct 2 20:20:08.569341 kernel: ACPI: SSDT 0xFFFF92A60014C000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Oct 2 20:20:08.569346 kernel: ACPI: Dynamic OEM Table Load: Oct 2 20:20:08.569351 kernel: ACPI: SSDT 0xFFFF92A601AE5400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Oct 2 20:20:08.569355 kernel: ACPI: Interpreter enabled Oct 2 20:20:08.569360 kernel: ACPI: PM: (supports S0 S5) Oct 2 20:20:08.569365 kernel: ACPI: Using IOAPIC for interrupt routing Oct 2 20:20:08.569370 kernel: HEST: Enabling Firmware First mode for corrected errors. Oct 2 20:20:08.569375 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Oct 2 20:20:08.569380 kernel: HEST: Table parsing has been initialized. Oct 2 20:20:08.569386 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Oct 2 20:20:08.569391 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 2 20:20:08.569396 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Oct 2 20:20:08.569401 kernel: ACPI: PM: Power Resource [USBC] Oct 2 20:20:08.569407 kernel: ACPI: PM: Power Resource [V0PR] Oct 2 20:20:08.569412 kernel: ACPI: PM: Power Resource [V1PR] Oct 2 20:20:08.569418 kernel: ACPI: PM: Power Resource [V2PR] Oct 2 20:20:08.569423 kernel: ACPI: PM: Power Resource [WRST] Oct 2 20:20:08.569427 kernel: ACPI: PM: Power Resource [FN00] Oct 2 20:20:08.569432 kernel: ACPI: PM: Power Resource [FN01] Oct 2 20:20:08.569438 kernel: ACPI: PM: Power Resource [FN02] Oct 2 20:20:08.569443 kernel: ACPI: PM: Power Resource [FN03] Oct 2 20:20:08.569448 kernel: ACPI: PM: Power Resource [FN04] Oct 2 20:20:08.569453 kernel: ACPI: PM: Power Resource [PIN] Oct 2 20:20:08.569458 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Oct 2 20:20:08.569521 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 20:20:08.569565 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Oct 2 20:20:08.569609 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Oct 2 20:20:08.569616 kernel: PCI host bridge to bus 0000:00 Oct 2 20:20:08.569658 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 2 20:20:08.569695 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 2 20:20:08.569732 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 2 20:20:08.569769 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Oct 2 20:20:08.569805 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Oct 2 20:20:08.569842 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Oct 2 20:20:08.569894 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Oct 2 20:20:08.569944 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Oct 2 20:20:08.569990 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Oct 2 20:20:08.570034 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Oct 2 20:20:08.570078 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Oct 2 20:20:08.570125 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Oct 2 20:20:08.570167 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Oct 2 20:20:08.570215 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Oct 2 20:20:08.570257 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Oct 2 20:20:08.570299 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Oct 2 20:20:08.570343 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Oct 2 20:20:08.570387 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Oct 2 20:20:08.570431 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Oct 2 20:20:08.570476 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Oct 2 20:20:08.570518 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Oct 2 20:20:08.570566 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Oct 2 20:20:08.570608 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Oct 2 20:20:08.570653 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Oct 2 20:20:08.570696 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Oct 2 20:20:08.570737 
kernel: pci 0000:00:16.0: PME# supported from D3hot Oct 2 20:20:08.570781 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Oct 2 20:20:08.570823 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Oct 2 20:20:08.570862 kernel: pci 0000:00:16.1: PME# supported from D3hot Oct 2 20:20:08.570906 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Oct 2 20:20:08.570950 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Oct 2 20:20:08.570991 kernel: pci 0000:00:16.4: PME# supported from D3hot Oct 2 20:20:08.571035 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Oct 2 20:20:08.571077 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Oct 2 20:20:08.571117 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Oct 2 20:20:08.571157 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Oct 2 20:20:08.571198 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Oct 2 20:20:08.571244 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Oct 2 20:20:08.571287 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Oct 2 20:20:08.571328 kernel: pci 0000:00:17.0: PME# supported from D3hot Oct 2 20:20:08.571374 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Oct 2 20:20:08.571418 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Oct 2 20:20:08.571467 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Oct 2 20:20:08.571508 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Oct 2 20:20:08.571557 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Oct 2 20:20:08.571599 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Oct 2 20:20:08.571645 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Oct 2 20:20:08.571687 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Oct 2 20:20:08.571733 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Oct 2 20:20:08.571777 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Oct 2 20:20:08.571822 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Oct 2 20:20:08.571863 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Oct 2 20:20:08.571910 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Oct 2 20:20:08.571956 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Oct 2 20:20:08.571997 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Oct 2 20:20:08.572038 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Oct 2 20:20:08.572087 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Oct 2 20:20:08.572129 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Oct 2 20:20:08.572176 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Oct 2 20:20:08.572219 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Oct 2 20:20:08.572264 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Oct 2 20:20:08.572306 kernel: pci 0000:01:00.0: PME# supported from D3cold Oct 2 20:20:08.572348 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Oct 2 20:20:08.572391 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Oct 2 20:20:08.572443 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Oct 2 20:20:08.572487 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Oct 2 20:20:08.572531 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff 
pref] Oct 2 20:20:08.572574 kernel: pci 0000:01:00.1: PME# supported from D3cold Oct 2 20:20:08.572616 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Oct 2 20:20:08.572660 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Oct 2 20:20:08.572703 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 2 20:20:08.572745 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Oct 2 20:20:08.572786 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Oct 2 20:20:08.572827 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Oct 2 20:20:08.572877 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Oct 2 20:20:08.572920 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Oct 2 20:20:08.572963 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Oct 2 20:20:08.573006 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Oct 2 20:20:08.573049 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Oct 2 20:20:08.573091 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Oct 2 20:20:08.573132 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Oct 2 20:20:08.573173 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Oct 2 20:20:08.573222 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Oct 2 20:20:08.573266 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Oct 2 20:20:08.573309 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Oct 2 20:20:08.573352 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Oct 2 20:20:08.573394 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Oct 2 20:20:08.573439 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Oct 2 20:20:08.573480 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Oct 2 20:20:08.573526 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Oct 2 20:20:08.573567 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Oct 2 20:20:08.573614 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Oct 2 20:20:08.573659 kernel: pci 0000:06:00.0: enabling Extended Tags Oct 2 20:20:08.573703 kernel: pci 0000:06:00.0: supports D1 D2 Oct 2 20:20:08.573746 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 2 20:20:08.573791 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Oct 2 20:20:08.573832 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Oct 2 20:20:08.573877 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Oct 2 20:20:08.573925 kernel: pci_bus 0000:07: extended config space not accessible Oct 2 20:20:08.573975 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Oct 2 20:20:08.574021 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Oct 2 20:20:08.574066 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Oct 2 20:20:08.574111 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Oct 2 20:20:08.574157 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 2 20:20:08.574260 kernel: pci 0000:07:00.0: supports D1 D2 Oct 2 20:20:08.574305 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 2 20:20:08.574350 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Oct 2 20:20:08.574393 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Oct 2 20:20:08.574439 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Oct 2 20:20:08.574447 kernel: ACPI: PCI: Interrupt 
link LNKA configured for IRQ 0 Oct 2 20:20:08.574453 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Oct 2 20:20:08.574458 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Oct 2 20:20:08.574465 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Oct 2 20:20:08.574470 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Oct 2 20:20:08.574476 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Oct 2 20:20:08.574481 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Oct 2 20:20:08.574486 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Oct 2 20:20:08.574492 kernel: iommu: Default domain type: Translated Oct 2 20:20:08.574497 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 2 20:20:08.574540 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Oct 2 20:20:08.574585 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 2 20:20:08.574632 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Oct 2 20:20:08.574639 kernel: vgaarb: loaded Oct 2 20:20:08.574645 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 20:20:08.574650 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 20:20:08.574656 kernel: PTP clock support registered Oct 2 20:20:08.574661 kernel: PCI: Using ACPI for IRQ routing Oct 2 20:20:08.574667 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 2 20:20:08.574672 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Oct 2 20:20:08.574678 kernel: e820: reserve RAM buffer [mem 0x825dd000-0x83ffffff] Oct 2 20:20:08.574684 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Oct 2 20:20:08.574689 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Oct 2 20:20:08.574694 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Oct 2 20:20:08.574699 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Oct 2 20:20:08.574704 kernel: clocksource: Switched to clocksource tsc-early Oct 2 20:20:08.574710 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 20:20:08.574715 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 20:20:08.574720 kernel: pnp: PnP ACPI init Oct 2 20:20:08.574765 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Oct 2 20:20:08.574807 kernel: pnp 00:02: [dma 0 disabled] Oct 2 20:20:08.574847 kernel: pnp 00:03: [dma 0 disabled] Oct 2 20:20:08.574889 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Oct 2 20:20:08.574926 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Oct 2 20:20:08.574967 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Oct 2 20:20:08.575009 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Oct 2 20:20:08.575047 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Oct 2 20:20:08.575083 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Oct 2 20:20:08.575120 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Oct 2 20:20:08.575157 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Oct 2 20:20:08.575194 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Oct 2 20:20:08.575230 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Oct 2 20:20:08.575268 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Oct 2 20:20:08.575309 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Oct 2 20:20:08.575345 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has 
been reserved Oct 2 20:20:08.575383 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Oct 2 20:20:08.575421 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Oct 2 20:20:08.575459 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Oct 2 20:20:08.575495 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Oct 2 20:20:08.575533 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Oct 2 20:20:08.575574 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Oct 2 20:20:08.575582 kernel: pnp: PnP ACPI: found 10 devices Oct 2 20:20:08.575587 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 2 20:20:08.575594 kernel: NET: Registered PF_INET protocol family Oct 2 20:20:08.575599 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 20:20:08.575605 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Oct 2 20:20:08.575610 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 20:20:08.575616 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 20:20:08.575622 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Oct 2 20:20:08.575627 kernel: TCP: Hash tables configured (established 262144 bind 65536) Oct 2 20:20:08.575633 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 2 20:20:08.575638 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 2 20:20:08.575643 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 20:20:08.575649 kernel: NET: Registered PF_XDP protocol family Oct 2 20:20:08.575690 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Oct 2 20:20:08.575733 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Oct 2 20:20:08.575776 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Oct 2 20:20:08.575821 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Oct 2 20:20:08.575864 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Oct 2 20:20:08.575907 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Oct 2 20:20:08.575949 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Oct 2 20:20:08.575991 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 2 20:20:08.576032 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Oct 2 20:20:08.576076 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Oct 2 20:20:08.576118 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Oct 2 20:20:08.576159 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Oct 2 20:20:08.576201 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Oct 2 20:20:08.576242 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Oct 2 20:20:08.576286 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Oct 2 20:20:08.576327 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Oct 2 20:20:08.576369 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Oct 2 20:20:08.576413 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Oct 2 20:20:08.576456 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Oct 2 20:20:08.576499 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Oct 2 20:20:08.576542 kernel: pci 0000:06:00.0: bridge 
window [mem 0x94000000-0x950fffff] Oct 2 20:20:08.576584 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Oct 2 20:20:08.576626 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Oct 2 20:20:08.576670 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Oct 2 20:20:08.576708 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Oct 2 20:20:08.576744 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 2 20:20:08.576781 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 2 20:20:08.576817 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 2 20:20:08.576854 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Oct 2 20:20:08.576889 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Oct 2 20:20:08.576932 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Oct 2 20:20:08.576973 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Oct 2 20:20:08.577017 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Oct 2 20:20:08.577057 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Oct 2 20:20:08.577099 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Oct 2 20:20:08.577139 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Oct 2 20:20:08.577181 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Oct 2 20:20:08.577221 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Oct 2 20:20:08.577262 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Oct 2 20:20:08.577302 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Oct 2 20:20:08.577310 kernel: PCI: CLS 64 bytes, default 64 Oct 2 20:20:08.577316 kernel: DMAR: No ATSR found Oct 2 20:20:08.577321 kernel: DMAR: No SATC found Oct 2 20:20:08.577326 kernel: DMAR: dmar0: Using Queued invalidation Oct 2 20:20:08.577368 kernel: pci 0000:00:00.0: Adding to iommu group 0 Oct 2 20:20:08.577414 kernel: pci 0000:00:01.0: Adding to iommu group 1 Oct 2 20:20:08.577458 kernel: pci 0000:00:08.0: Adding to iommu group 2 Oct 2 20:20:08.577499 kernel: pci 0000:00:12.0: Adding to iommu group 3 Oct 2 20:20:08.577543 kernel: pci 0000:00:14.0: Adding to iommu group 4 Oct 2 20:20:08.577585 kernel: pci 0000:00:14.2: Adding to iommu group 4 Oct 2 20:20:08.577626 kernel: pci 0000:00:15.0: Adding to iommu group 5 Oct 2 20:20:08.577668 kernel: pci 0000:00:15.1: Adding to iommu group 5 Oct 2 20:20:08.577709 kernel: pci 0000:00:16.0: Adding to iommu group 6 Oct 2 20:20:08.577753 kernel: pci 0000:00:16.1: Adding to iommu group 6 Oct 2 20:20:08.577794 kernel: pci 0000:00:16.4: Adding to iommu group 6 Oct 2 20:20:08.577835 kernel: pci 0000:00:17.0: Adding to iommu group 7 Oct 2 20:20:08.577877 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Oct 2 20:20:08.577919 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Oct 2 20:20:08.577960 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Oct 2 20:20:08.578002 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Oct 2 20:20:08.578044 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Oct 2 20:20:08.578087 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Oct 2 20:20:08.578129 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Oct 2 20:20:08.578170 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Oct 2 20:20:08.578211 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Oct 2 20:20:08.578254 kernel: pci 0000:01:00.0: Adding to iommu group 1 Oct 2 20:20:08.578297 kernel: pci 0000:01:00.1: Adding 
to iommu group 1 Oct 2 20:20:08.578341 kernel: pci 0000:03:00.0: Adding to iommu group 15 Oct 2 20:20:08.578384 kernel: pci 0000:04:00.0: Adding to iommu group 16 Oct 2 20:20:08.578431 kernel: pci 0000:06:00.0: Adding to iommu group 17 Oct 2 20:20:08.578476 kernel: pci 0000:07:00.0: Adding to iommu group 17 Oct 2 20:20:08.578484 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Oct 2 20:20:08.578490 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Oct 2 20:20:08.578495 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Oct 2 20:20:08.578501 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Oct 2 20:20:08.578506 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Oct 2 20:20:08.578511 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Oct 2 20:20:08.578518 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Oct 2 20:20:08.578562 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Oct 2 20:20:08.578570 kernel: Initialise system trusted keyrings Oct 2 20:20:08.578575 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Oct 2 20:20:08.578581 kernel: Key type asymmetric registered Oct 2 20:20:08.578586 kernel: Asymmetric key parser 'x509' registered Oct 2 20:20:08.578591 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 20:20:08.578597 kernel: io scheduler mq-deadline registered Oct 2 20:20:08.578603 kernel: io scheduler kyber registered Oct 2 20:20:08.578609 kernel: io scheduler bfq registered Oct 2 20:20:08.578649 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Oct 2 20:20:08.578692 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Oct 2 20:20:08.578734 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Oct 2 20:20:08.578776 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Oct 2 20:20:08.578817 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Oct 2 20:20:08.578860 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Oct 2 20:20:08.578909 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Oct 2 20:20:08.578917 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Oct 2 20:20:08.578922 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Oct 2 20:20:08.578928 kernel: pstore: Registered erst as persistent store backend Oct 2 20:20:08.578933 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 2 20:20:08.578938 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 20:20:08.578944 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 2 20:20:08.578949 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Oct 2 20:20:08.578956 kernel: hpet_acpi_add: no address or irqs in _CRS Oct 2 20:20:08.579001 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Oct 2 20:20:08.579009 kernel: i8042: PNP: No PS/2 controller found. 
Oct 2 20:20:08.579047 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Oct 2 20:20:08.579085 kernel: rtc_cmos rtc_cmos: registered as rtc0 Oct 2 20:20:08.579123 kernel: rtc_cmos rtc_cmos: setting system clock to 2023-10-02T20:20:07 UTC (1696278007) Oct 2 20:20:08.579160 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Oct 2 20:20:08.579168 kernel: fail to initialize ptp_kvm Oct 2 20:20:08.579174 kernel: intel_pstate: Intel P-state driver initializing Oct 2 20:20:08.579180 kernel: intel_pstate: Disabling energy efficiency optimization Oct 2 20:20:08.579185 kernel: intel_pstate: HWP enabled Oct 2 20:20:08.579191 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Oct 2 20:20:08.579196 kernel: vesafb: scrolling: redraw Oct 2 20:20:08.579202 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Oct 2 20:20:08.579207 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000005d4eb9a7, using 768k, total 768k Oct 2 20:20:08.579213 kernel: Console: switching to colour frame buffer device 128x48 Oct 2 20:20:08.579218 kernel: fb0: VESA VGA frame buffer device Oct 2 20:20:08.579224 kernel: NET: Registered PF_INET6 protocol family Oct 2 20:20:08.579229 kernel: Segment Routing with IPv6 Oct 2 20:20:08.579235 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 20:20:08.579240 kernel: NET: Registered PF_PACKET protocol family Oct 2 20:20:08.579245 kernel: Key type dns_resolver registered Oct 2 20:20:08.579251 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Oct 2 20:20:08.579256 kernel: microcode: Microcode Update Driver: v2.2. Oct 2 20:20:08.579261 kernel: IPI shorthand broadcast: enabled Oct 2 20:20:08.579267 kernel: sched_clock: Marking stable (1679399196, 1334986334)->(4431864053, -1417478523) Oct 2 20:20:08.579273 kernel: registered taskstats version 1 Oct 2 20:20:08.579278 kernel: Loading compiled-in X.509 certificates Oct 2 20:20:08.579284 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861' Oct 2 20:20:08.579289 kernel: Key type .fscrypt registered Oct 2 20:20:08.579294 kernel: Key type fscrypt-provisioning registered Oct 2 20:20:08.579299 kernel: pstore: Using crash dump compression: deflate Oct 2 20:20:08.579305 kernel: ima: Allocated hash algorithm: sha1 Oct 2 20:20:08.579310 kernel: ima: No architecture policies found Oct 2 20:20:08.579316 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 20:20:08.579322 kernel: Write protecting the kernel read-only data: 28672k Oct 2 20:20:08.579327 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 20:20:08.579332 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K Oct 2 20:20:08.579338 kernel: Run /init as init process Oct 2 20:20:08.579343 kernel: with arguments: Oct 2 20:20:08.579349 kernel: /init Oct 2 20:20:08.579354 kernel: with environment: Oct 2 20:20:08.579359 kernel: HOME=/ Oct 2 20:20:08.579364 kernel: TERM=linux Oct 2 20:20:08.579370 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 20:20:08.579376 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 20:20:08.579383 systemd[1]: Detected architecture x86-64. Oct 2 20:20:08.579389 systemd[1]: Running in initrd. 
Oct 2 20:20:08.579394 systemd[1]: No hostname configured, using default hostname. Oct 2 20:20:08.579400 systemd[1]: Hostname set to . Oct 2 20:20:08.579407 systemd[1]: Initializing machine ID from random generator. Oct 2 20:20:08.579414 systemd[1]: Queued start job for default target initrd.target. Oct 2 20:20:08.579420 systemd[1]: Started systemd-ask-password-console.path. Oct 2 20:20:08.579425 systemd[1]: Reached target cryptsetup.target. Oct 2 20:20:08.579431 systemd[1]: Reached target ignition-diskful-subsequent.target. Oct 2 20:20:08.579437 systemd[1]: Reached target paths.target. Oct 2 20:20:08.579442 systemd[1]: Reached target slices.target. Oct 2 20:20:08.579447 systemd[1]: Reached target swap.target. Oct 2 20:20:08.579453 systemd[1]: Reached target timers.target. Oct 2 20:20:08.579459 systemd[1]: Listening on iscsid.socket. Oct 2 20:20:08.579465 systemd[1]: Listening on iscsiuio.socket. Oct 2 20:20:08.579471 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 20:20:08.579476 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 20:20:08.579482 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Oct 2 20:20:08.579487 systemd[1]: Listening on systemd-journald.socket. Oct 2 20:20:08.579493 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Oct 2 20:20:08.579498 kernel: clocksource: Switched to clocksource tsc Oct 2 20:20:08.579504 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 20:20:08.579510 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 20:20:08.579516 systemd[1]: Reached target sockets.target. Oct 2 20:20:08.579521 systemd[1]: Starting iscsiuio.service... Oct 2 20:20:08.579527 systemd[1]: Starting kmod-static-nodes.service... Oct 2 20:20:08.579532 kernel: SCSI subsystem initialized Oct 2 20:20:08.579537 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 20:20:08.579543 kernel: Loading iSCSI transport class v2.0-870. Oct 2 20:20:08.579548 systemd[1]: Starting systemd-journald.service... Oct 2 20:20:08.579555 systemd[1]: Starting systemd-modules-load.service... Oct 2 20:20:08.579562 systemd-journald[266]: Journal started Oct 2 20:20:08.579588 systemd-journald[266]: Runtime Journal (/run/log/journal/c558b2811863431c919efe30429a2a3d) is 8.0M, max 640.1M, 632.1M free. Oct 2 20:20:08.582992 systemd-modules-load[267]: Inserted module 'overlay' Oct 2 20:20:08.606816 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 20:20:08.640436 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 20:20:08.640452 systemd[1]: Started iscsiuio.service. Oct 2 20:20:08.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:08.666460 kernel: Bridge firewalling registered Oct 2 20:20:08.666475 kernel: audit: type=1130 audit(1696278008.665:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:08.666483 systemd[1]: Started systemd-journald.service. Oct 2 20:20:08.725368 systemd-modules-load[267]: Inserted module 'br_netfilter' Oct 2 20:20:08.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 20:20:08.725647 systemd[1]: Finished kmod-static-nodes.service. Oct 2 20:20:08.889618 kernel: audit: type=1130 audit(1696278008.725:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:08.889668 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 20:20:08.889697 kernel: audit: type=1130 audit(1696278008.787:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:08.889723 kernel: device-mapper: uevent: version 1.0.3 Oct 2 20:20:08.889748 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 20:20:08.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:08.787565 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 20:20:08.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:08.887046 systemd-modules-load[267]: Inserted module 'dm_multipath' Oct 2 20:20:08.994438 kernel: audit: type=1130 audit(1696278008.897:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:08.994450 kernel: audit: type=1130 audit(1696278008.949:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:08.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:08.897838 systemd[1]: Finished systemd-modules-load.service. Oct 2 20:20:09.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:08.949673 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 20:20:09.002986 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 20:20:09.049129 systemd[1]: Starting systemd-sysctl.service... Oct 2 20:20:09.049487 kernel: audit: type=1130 audit(1696278009.002:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:09.049462 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 20:20:09.052174 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 20:20:09.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:09.052722 systemd[1]: Finished systemd-sysctl.service. 
Oct 2 20:20:09.102631 kernel: audit: type=1130 audit(1696278009.051:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:09.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:09.114750 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 20:20:09.220456 kernel: audit: type=1130 audit(1696278009.114:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:09.220473 kernel: audit: type=1130 audit(1696278009.170:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:09.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:09.171044 systemd[1]: Starting dracut-cmdline.service... Oct 2 20:20:09.251520 kernel: iscsi: registered transport (tcp) Oct 2 20:20:09.251531 dracut-cmdline[289]: dracut-dracut-053 Oct 2 20:20:09.251531 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Oct 2 20:20:09.251531 dracut-cmdline[289]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 20:20:09.352452 kernel: iscsi: registered transport (qla4xxx) Oct 2 20:20:09.352469 kernel: QLogic iSCSI HBA Driver Oct 2 20:20:09.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:09.312483 systemd[1]: Finished dracut-cmdline.service. Oct 2 20:20:09.334089 systemd[1]: Starting dracut-pre-udev.service... Oct 2 20:20:09.399513 kernel: raid6: avx2x4 gen() 36099 MB/s Oct 2 20:20:09.359882 systemd[1]: Starting iscsid.service... Oct 2 20:20:09.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:09.377864 systemd[1]: Started iscsid.service. Oct 2 20:20:09.436514 kernel: raid6: avx2x4 xor() 18762 MB/s Oct 2 20:20:09.436525 iscsid[451]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 20:20:09.436525 iscsid[451]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Oct 2 20:20:09.436525 iscsid[451]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Oct 2 20:20:09.436525 iscsid[451]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Oct 2 20:20:09.436525 iscsid[451]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 20:20:09.436525 iscsid[451]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 20:20:09.436525 iscsid[451]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 20:20:09.602505 kernel: raid6: avx2x2 gen() 54046 MB/s Oct 2 20:20:09.602517 kernel: raid6: avx2x2 xor() 31999 MB/s Oct 2 20:20:09.602524 kernel: raid6: avx2x1 gen() 44897 MB/s Oct 2 20:20:09.602530 kernel: raid6: avx2x1 xor() 27410 MB/s Oct 2 20:20:09.602536 kernel: raid6: sse2x4 gen() 20988 MB/s Oct 2 20:20:09.645469 kernel: raid6: sse2x4 xor() 11551 MB/s Oct 2 20:20:09.680438 kernel: raid6: sse2x2 gen() 21238 MB/s Oct 2 20:20:09.715440 kernel: raid6: sse2x2 xor() 13179 MB/s Oct 2 20:20:09.750472 kernel: raid6: sse2x1 gen() 17961 MB/s Oct 2 20:20:09.802300 kernel: raid6: sse2x1 xor() 8745 MB/s Oct 2 20:20:09.802315 kernel: raid6: using algorithm avx2x2 gen() 54046 MB/s Oct 2 20:20:09.802323 kernel: raid6: .... xor() 31999 MB/s, rmw enabled Oct 2 20:20:09.820831 kernel: raid6: using avx2x2 recovery algorithm Oct 2 20:20:09.868483 kernel: xor: automatically using best checksumming function avx Oct 2 20:20:09.947442 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 20:20:09.952408 systemd[1]: Finished dracut-pre-udev.service. Oct 2 20:20:09.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:09.961000 audit: BPF prog-id=6 op=LOAD Oct 2 20:20:09.961000 audit: BPF prog-id=7 op=LOAD Oct 2 20:20:09.962300 systemd[1]: Starting systemd-udevd.service... Oct 2 20:20:09.970203 systemd-udevd[467]: Using default interface naming scheme 'v252'. Oct 2 20:20:09.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:09.977790 systemd[1]: Started systemd-udevd.service. Oct 2 20:20:10.018566 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation Oct 2 20:20:09.995133 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 20:20:10.026412 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 20:20:10.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:10.043321 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 20:20:10.090976 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 20:20:10.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:10.105127 systemd[1]: Starting dracut-initqueue.service... Oct 2 20:20:10.151519 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 20:20:10.151533 kernel: libata version 3.00 loaded. 
Oct 2 20:20:10.151545 kernel: ACPI: bus type USB registered Oct 2 20:20:10.151551 kernel: usbcore: registered new interface driver usbfs Oct 2 20:20:10.175152 kernel: usbcore: registered new interface driver hub Oct 2 20:20:10.175173 kernel: usbcore: registered new device driver usb Oct 2 20:20:10.198415 kernel: ahci 0000:00:17.0: version 3.0 Oct 2 20:20:10.217441 kernel: AVX2 version of gcm_enc/dec engaged. Oct 2 20:20:10.217477 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Oct 2 20:20:10.259171 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Oct 2 20:20:10.259262 kernel: AES CTR mode by8 optimization enabled Oct 2 20:20:10.276409 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Oct 2 20:20:10.294424 kernel: scsi host0: ahci Oct 2 20:20:10.294498 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Oct 2 20:20:10.338888 kernel: scsi host1: ahci Oct 2 20:20:10.339070 kernel: pps pps0: new PPS source ptp0 Oct 2 20:20:10.339216 kernel: scsi host2: ahci Oct 2 20:20:10.367016 kernel: igb 0000:03:00.0: added PHC on eth0 Oct 2 20:20:10.367194 kernel: scsi host3: ahci Oct 2 20:20:10.394716 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Oct 2 20:20:10.411459 kernel: scsi host4: ahci Oct 2 20:20:10.411478 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d2:54 Oct 2 20:20:10.440380 kernel: scsi host5: ahci Oct 2 20:20:10.452410 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Oct 2 20:20:10.452562 kernel: scsi host6: ahci Oct 2 20:20:10.477539 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Oct 2 20:20:10.477697 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Oct 2 20:20:10.524671 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Oct 2 20:20:10.524714 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Oct 2 20:20:10.539480 kernel: pps pps1: new PPS source ptp1 Oct 2 20:20:10.539667 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Oct 2 20:20:10.565487 kernel: igb 0000:04:00.0: added PHC on eth1 Oct 2 20:20:10.565677 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Oct 2 20:20:10.592649 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Oct 2 20:20:10.592813 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Oct 2 20:20:10.623036 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d2:55 Oct 2 20:20:10.623186 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Oct 2 20:20:10.654558 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Oct 2 20:20:10.684899 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Oct 2 20:20:10.720665 kernel: mlx5_core 0000:01:00.0: firmware version: 14.28.2006 Oct 2 20:20:10.720737 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Oct 2 20:20:10.974414 kernel: ata3: SATA link down (SStatus 0 SControl 300) Oct 2 20:20:10.974434 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 2 20:20:10.989414 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Oct 2 20:20:10.989563 kernel: ata7: SATA link down (SStatus 0 SControl 300) Oct 2 20:20:11.024436 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Oct 2 20:20:11.040434 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Oct 2 20:20:11.040518 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 2 20:20:11.074434 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Oct 2 20:20:11.090405 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 2 20:20:11.106440 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Oct 2 20:20:11.123445 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Oct 2 20:20:11.175119 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Oct 2 20:20:11.175158 kernel: ata1.00: Features: NCQ-prio Oct 2 20:20:11.175166 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Oct 2 20:20:11.206184 kernel: ata2.00: Features: NCQ-prio Oct 2 20:20:11.225471 kernel: ata1.00: configured for UDMA/133 Oct 2 20:20:11.225510 kernel: ata2.00: configured for UDMA/133 Oct 2 20:20:11.225517 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Oct 2 20:20:11.257445 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Oct 2 20:20:11.296439 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Oct 2 20:20:11.296657 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Oct 2 20:20:11.344837 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Oct 2 20:20:11.344970 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Oct 2 20:20:11.382066 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Oct 2 20:20:11.382179 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Oct 2 20:20:11.382231 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Oct 2 20:20:11.399990 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Oct 2 20:20:11.431443 kernel: hub 1-0:1.0: USB hub found Oct 2 20:20:11.431523 kernel: hub 1-0:1.0: 16 ports detected Oct 2 20:20:11.447452 kernel: ata1.00: Enabling discard_zeroes_data Oct 2 20:20:11.447468 kernel: hub 2-0:1.0: USB hub found Oct 2 20:20:11.447541 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Oct 2 20:20:11.460356 kernel: ata2.00: Enabling discard_zeroes_data Oct 2 20:20:11.460372 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Oct 2 20:20:11.460480 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Oct 2 20:20:11.460541 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Oct 2 20:20:11.460597 kernel: sd 1:0:0:0: [sdb] Write Protect is off Oct 2 20:20:11.460683 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Oct 2 20:20:11.460751 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Oct 2 20:20:11.460806 kernel: ata2.00: Enabling 
discard_zeroes_data Oct 2 20:20:11.474471 kernel: ata2.00: Enabling discard_zeroes_data Oct 2 20:20:11.474485 kernel: hub 2-0:1.0: 10 ports detected Oct 2 20:20:11.474553 kernel: usb: port power management may be unreliable Oct 2 20:20:11.503144 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Oct 2 20:20:11.503218 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Oct 2 20:20:11.538158 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 2 20:20:11.567643 kernel: mlx5_core 0000:01:00.1: firmware version: 14.28.2006 Oct 2 20:20:11.567723 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Oct 2 20:20:11.567782 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Oct 2 20:20:11.600839 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Oct 2 20:20:11.717442 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Oct 2 20:20:11.717490 kernel: ata1.00: Enabling discard_zeroes_data Oct 2 20:20:11.794581 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 20:20:11.794596 kernel: ata1.00: Enabling discard_zeroes_data Oct 2 20:20:11.794604 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 2 20:20:11.833820 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 20:20:11.882549 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by (udev-worker) (513) Oct 2 20:20:11.882564 kernel: hub 1-14:1.0: USB hub found Oct 2 20:20:11.882644 kernel: hub 1-14:1.0: 4 ports detected Oct 2 20:20:11.882701 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Oct 2 20:20:11.852521 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 20:20:11.950495 kernel: port_module: 9 callbacks suppressed Oct 2 20:20:11.950507 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Oct 2 20:20:11.858794 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 20:20:11.927353 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 20:20:11.996488 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Oct 2 20:20:11.967501 systemd[1]: Reached target initrd-root-device.target. Oct 2 20:20:11.988950 systemd[1]: Starting disk-uuid.service... Oct 2 20:20:12.097559 kernel: audit: type=1130 audit(1696278012.018:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:12.097570 kernel: audit: type=1131 audit(1696278012.018:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:12.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:12.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:12.003690 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 20:20:12.003737 systemd[1]: Finished disk-uuid.service. Oct 2 20:20:12.038058 systemd[1]: Reached target local-fs-pre.target. Oct 2 20:20:12.105632 systemd[1]: Reached target local-fs.target. 
Oct 2 20:20:12.105740 systemd[1]: Reached target sysinit.target. Oct 2 20:20:12.226225 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 20:20:12.226238 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Oct 2 20:20:12.226257 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Oct 2 20:20:12.226325 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Oct 2 20:20:12.126524 systemd[1]: Reached target basic.target. Oct 2 20:20:12.259472 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Oct 2 20:20:12.141052 systemd[1]: Starting verity-setup.service... Oct 2 20:20:12.216817 systemd[1]: Found device dev-mapper-usr.device. Oct 2 20:20:12.318443 kernel: audit: type=1130 audit(1696278012.276:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:12.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:12.245719 systemd[1]: Finished verity-setup.service. Oct 2 20:20:12.278036 systemd[1]: Mounting sysusr-usr.mount... Oct 2 20:20:12.416392 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 2 20:20:12.416411 kernel: usbcore: registered new interface driver usbhid Oct 2 20:20:12.416419 kernel: audit: type=1130 audit(1696278012.354:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:12.416426 kernel: usbhid: USB HID core driver Oct 2 20:20:12.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:12.331563 systemd[1]: Finished dracut-initqueue.service. Oct 2 20:20:12.480510 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Oct 2 20:20:12.480527 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 20:20:12.354599 systemd[1]: Reached target remote-fs-pre.target. Oct 2 20:20:12.440509 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 20:20:12.473507 systemd[1]: Reached target remote-fs.target. Oct 2 20:20:12.625951 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Oct 2 20:20:12.626062 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Oct 2 20:20:12.626073 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Oct 2 20:20:12.495044 systemd[1]: Starting dracut-pre-mount.service... Oct 2 20:20:12.528797 systemd[1]: Mounted sysusr-usr.mount. Oct 2 20:20:12.707663 kernel: audit: type=1130 audit(1696278012.649:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:12.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 20:20:12.633760 systemd[1]: Finished dracut-pre-mount.service. Oct 2 20:20:12.650429 systemd[1]: Starting systemd-fsck-root.service... Oct 2 20:20:12.726013 systemd-fsck[712]: ROOT: clean, 631/553520 files, 107055/553472 blocks Oct 2 20:20:12.741276 systemd[1]: Finished systemd-fsck-root.service. Oct 2 20:20:12.828127 kernel: audit: type=1130 audit(1696278012.749:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:12.828141 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 20:20:12.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:12.751326 systemd[1]: Mounting sysroot.mount... Oct 2 20:20:12.836069 systemd[1]: Mounted sysroot.mount. Oct 2 20:20:12.849694 systemd[1]: Reached target initrd-root-fs.target. Oct 2 20:20:12.857340 systemd[1]: Mounting sysroot-usr.mount... Oct 2 20:20:12.879262 systemd[1]: Mounted sysroot-usr.mount. Oct 2 20:20:12.888101 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 20:20:12.989903 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 20:20:12.989924 kernel: BTRFS info (device sda6): using free space tree Oct 2 20:20:12.989932 kernel: BTRFS info (device sda6): has skinny extents Oct 2 20:20:12.989938 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 2 20:20:12.913098 systemd[1]: Starting initrd-setup-root.service... Oct 2 20:20:12.997726 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 20:20:13.015748 systemd[1]: Finished initrd-setup-root.service. Oct 2 20:20:13.098634 kernel: audit: type=1130 audit(1696278013.031:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.033481 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 20:20:13.107729 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 20:20:13.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.182482 kernel: audit: type=1130 audit(1696278013.127:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.182492 initrd-setup-root-after-ignition[797]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 20:20:13.127547 systemd[1]: Reached target ignition-subsequent.target. Oct 2 20:20:13.190929 systemd[1]: Starting initrd-parse-etc.service... Oct 2 20:20:13.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:20:13.283460 kernel: audit: type=1130 audit(1696278013.228:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.218607 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 20:20:13.218661 systemd[1]: Finished initrd-parse-etc.service. Oct 2 20:20:13.228683 systemd[1]: Reached target initrd-fs.target. Oct 2 20:20:13.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.291643 systemd[1]: Reached target initrd.target. Oct 2 20:20:13.291700 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 20:20:13.292038 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 20:20:13.313781 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 20:20:13.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.330267 systemd[1]: Starting initrd-cleanup.service... Oct 2 20:20:13.348477 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 20:20:13.360720 systemd[1]: Stopped target timers.target. Oct 2 20:20:13.379013 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 20:20:13.379320 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 20:20:13.396334 systemd[1]: Stopped target initrd.target. Oct 2 20:20:13.410052 systemd[1]: Stopped target basic.target. Oct 2 20:20:13.424064 systemd[1]: Stopped target ignition-subsequent.target. Oct 2 20:20:13.439961 systemd[1]: Stopped target ignition-diskful-subsequent.target. Oct 2 20:20:13.456969 systemd[1]: Stopped target initrd-root-device.target. Oct 2 20:20:13.474054 systemd[1]: Stopped target paths.target. Oct 2 20:20:13.488060 systemd[1]: Stopped target remote-fs.target. Oct 2 20:20:13.502958 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 20:20:13.517950 systemd[1]: Stopped target slices.target. Oct 2 20:20:13.533042 systemd[1]: Stopped target sockets.target. Oct 2 20:20:13.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.549966 systemd[1]: Stopped target sysinit.target. Oct 2 20:20:13.565971 systemd[1]: Stopped target local-fs.target. Oct 2 20:20:13.580952 systemd[1]: Stopped target local-fs-pre.target. Oct 2 20:20:13.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.595956 systemd[1]: Stopped target swap.target. Oct 2 20:20:13.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.611995 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Oct 2 20:20:13.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.712766 iscsid[451]: iscsid shutting down. Oct 2 20:20:13.612328 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 20:20:13.627258 systemd[1]: Stopped target cryptsetup.target. Oct 2 20:20:13.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.641768 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 20:20:13.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.645645 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 20:20:13.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.656824 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 20:20:13.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.657145 systemd[1]: Stopped dracut-initqueue.service. Oct 2 20:20:13.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.672074 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 20:20:13.672398 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 20:20:13.689046 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 20:20:13.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.689352 systemd[1]: Stopped initrd-setup-root.service. Oct 2 20:20:13.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.704526 systemd[1]: Stopping iscsid.service... Oct 2 20:20:13.719595 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 20:20:13.719670 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 20:20:13.740745 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 20:20:13.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.740846 systemd[1]: Stopped systemd-sysctl.service. Oct 2 20:20:13.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.755860 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Oct 2 20:20:13.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.755999 systemd[1]: Stopped systemd-modules-load.service. Oct 2 20:20:13.774005 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 20:20:13.774288 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 20:20:14.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.791042 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 20:20:14.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.791349 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 20:20:14.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:14.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.808535 systemd[1]: Stopping systemd-udevd.service... Oct 2 20:20:14.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:14.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:13.822074 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 20:20:13.822474 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 20:20:13.822519 systemd[1]: Stopped iscsid.service. Oct 2 20:20:13.845980 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 20:20:13.846090 systemd[1]: Stopped systemd-udevd.service. Oct 2 20:20:13.865958 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 20:20:13.866053 systemd[1]: Closed iscsid.socket. Oct 2 20:20:13.879724 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 20:20:13.879818 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 20:20:13.895783 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 20:20:13.895871 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 20:20:13.913829 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 20:20:13.913966 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 20:20:13.928900 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 20:20:13.929033 systemd[1]: Stopped dracut-cmdline.service. Oct 2 20:20:13.945902 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 20:20:13.946039 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 20:20:13.963611 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 20:20:13.979851 systemd[1]: Stopping iscsiuio.service... Oct 2 20:20:13.994605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Oct 2 20:20:13.994826 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 20:20:14.013653 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 20:20:14.163427 systemd-journald[266]: Received SIGTERM from PID 1 (n/a). Oct 2 20:20:14.013877 systemd[1]: Stopped iscsiuio.service. Oct 2 20:20:14.028304 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 20:20:14.028520 systemd[1]: Finished initrd-cleanup.service. Oct 2 20:20:14.046211 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 20:20:14.046419 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 20:20:14.062831 systemd[1]: Reached target initrd-switch-root.target. Oct 2 20:20:14.075667 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 20:20:14.075766 systemd[1]: Closed iscsiuio.socket. Oct 2 20:20:14.096178 systemd[1]: Starting initrd-switch-root.service... Oct 2 20:20:14.118232 systemd[1]: Switching root. Oct 2 20:20:14.163620 systemd-journald[266]: Journal stopped Oct 2 20:20:18.230903 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 20:20:18.230916 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 20:20:18.230924 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 20:20:18.230929 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 20:20:18.230934 kernel: SELinux: policy capability open_perms=1 Oct 2 20:20:18.230939 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 20:20:18.230944 kernel: SELinux: policy capability always_check_network=0 Oct 2 20:20:18.230950 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 20:20:18.230955 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 20:20:18.230960 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 20:20:18.230965 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 20:20:18.230971 systemd[1]: Successfully loaded SELinux policy in 304.969ms. Oct 2 20:20:18.230977 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.157ms. Oct 2 20:20:18.230984 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 20:20:18.230991 systemd[1]: Detected architecture x86-64. Oct 2 20:20:18.230997 systemd[1]: Detected first boot. Oct 2 20:20:18.231002 systemd[1]: Hostname set to . Oct 2 20:20:18.231008 systemd[1]: Initializing machine ID from random generator. Oct 2 20:20:18.231014 systemd[1]: Populated /etc with preset unit settings. Oct 2 20:20:18.231020 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 20:20:18.231026 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 20:20:18.231033 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 20:20:18.231039 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 20:20:18.231045 systemd[1]: Stopped initrd-switch-root.service. 
Oct 2 20:20:18.231051 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 20:20:18.231057 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 20:20:18.231062 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 20:20:18.231070 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Oct 2 20:20:18.231076 systemd[1]: Created slice system-getty.slice. Oct 2 20:20:18.231081 systemd[1]: Created slice system-modprobe.slice. Oct 2 20:20:18.231087 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 20:20:18.231093 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 20:20:18.231099 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 20:20:18.231104 systemd[1]: Created slice user.slice. Oct 2 20:20:18.231110 systemd[1]: Started systemd-ask-password-console.path. Oct 2 20:20:18.231116 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 20:20:18.231123 systemd[1]: Set up automount boot.automount. Oct 2 20:20:18.231128 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 20:20:18.231134 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 20:20:18.231140 systemd[1]: Stopped target initrd-fs.target. Oct 2 20:20:18.231147 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 20:20:18.231153 systemd[1]: Reached target integritysetup.target. Oct 2 20:20:18.231159 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 20:20:18.231165 systemd[1]: Reached target remote-fs.target. Oct 2 20:20:18.231172 systemd[1]: Reached target slices.target. Oct 2 20:20:18.231178 systemd[1]: Reached target swap.target. Oct 2 20:20:18.231184 systemd[1]: Reached target torcx.target. Oct 2 20:20:18.231190 systemd[1]: Reached target veritysetup.target. Oct 2 20:20:18.231196 systemd[1]: Listening on systemd-coredump.socket. Oct 2 20:20:18.231201 systemd[1]: Listening on systemd-initctl.socket. Oct 2 20:20:18.231207 systemd[1]: Listening on systemd-networkd.socket. Oct 2 20:20:18.231215 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 20:20:18.231221 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 20:20:18.231227 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 20:20:18.231233 systemd[1]: Mounting dev-hugepages.mount... Oct 2 20:20:18.231239 systemd[1]: Mounting dev-mqueue.mount... Oct 2 20:20:18.231245 systemd[1]: Mounting media.mount... Oct 2 20:20:18.231251 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 20:20:18.231259 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 20:20:18.231265 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 20:20:18.231271 systemd[1]: Mounting tmp.mount... Oct 2 20:20:18.231277 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 20:20:18.231283 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 20:20:18.231289 systemd[1]: Starting kmod-static-nodes.service... Oct 2 20:20:18.231295 systemd[1]: Starting modprobe@configfs.service... Oct 2 20:20:18.231301 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 20:20:18.231307 systemd[1]: Starting modprobe@drm.service... Oct 2 20:20:18.231315 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 20:20:18.231321 systemd[1]: Starting modprobe@fuse.service... Oct 2 20:20:18.231327 kernel: fuse: init (API version 7.34) Oct 2 20:20:18.231332 systemd[1]: Starting modprobe@loop.service... 
Oct 2 20:20:18.231338 kernel: loop: module loaded Oct 2 20:20:18.231344 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 20:20:18.231350 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 20:20:18.231357 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 20:20:18.231363 kernel: kauditd_printk_skb: 41 callbacks suppressed Oct 2 20:20:18.231369 kernel: audit: type=1131 audit(1696278017.872:69): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.231375 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 20:20:18.231381 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 20:20:18.231388 kernel: audit: type=1131 audit(1696278017.960:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.231393 systemd[1]: Stopped systemd-journald.service. Oct 2 20:20:18.231399 kernel: audit: type=1130 audit(1696278018.024:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.231408 kernel: audit: type=1131 audit(1696278018.024:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.231414 kernel: audit: type=1334 audit(1696278018.109:73): prog-id=13 op=LOAD Oct 2 20:20:18.231443 kernel: audit: type=1334 audit(1696278018.127:74): prog-id=14 op=LOAD Oct 2 20:20:18.231448 kernel: audit: type=1334 audit(1696278018.145:75): prog-id=15 op=LOAD Oct 2 20:20:18.231454 kernel: audit: type=1334 audit(1696278018.163:76): prog-id=11 op=UNLOAD Oct 2 20:20:18.231475 systemd[1]: Starting systemd-journald.service... Oct 2 20:20:18.231481 kernel: audit: type=1334 audit(1696278018.163:77): prog-id=12 op=UNLOAD Oct 2 20:20:18.231486 systemd[1]: Starting systemd-modules-load.service... Oct 2 20:20:18.231492 kernel: audit: type=1305 audit(1696278018.228:78): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 20:20:18.231501 systemd-journald[938]: Journal started Oct 2 20:20:18.231525 systemd-journald[938]: Runtime Journal (/run/log/journal/9b095031305c446eb54637b5256ae118) is 8.0M, max 640.1M, 632.1M free. 
Oct 2 20:20:14.747000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 20:20:15.027000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 20:20:15.029000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 20:20:15.029000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 20:20:15.030000 audit: BPF prog-id=8 op=LOAD Oct 2 20:20:15.030000 audit: BPF prog-id=8 op=UNLOAD Oct 2 20:20:15.030000 audit: BPF prog-id=9 op=LOAD Oct 2 20:20:15.030000 audit: BPF prog-id=9 op=UNLOAD Oct 2 20:20:16.636000 audit: BPF prog-id=10 op=LOAD Oct 2 20:20:16.636000 audit: BPF prog-id=3 op=UNLOAD Oct 2 20:20:16.636000 audit: BPF prog-id=11 op=LOAD Oct 2 20:20:16.636000 audit: BPF prog-id=12 op=LOAD Oct 2 20:20:16.636000 audit: BPF prog-id=4 op=UNLOAD Oct 2 20:20:16.636000 audit: BPF prog-id=5 op=UNLOAD Oct 2 20:20:16.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:16.682000 audit: BPF prog-id=10 op=UNLOAD Oct 2 20:20:16.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:16.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:17.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:17.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.109000 audit: BPF prog-id=13 op=LOAD Oct 2 20:20:18.127000 audit: BPF prog-id=14 op=LOAD Oct 2 20:20:18.145000 audit: BPF prog-id=15 op=LOAD Oct 2 20:20:18.163000 audit: BPF prog-id=11 op=UNLOAD Oct 2 20:20:18.163000 audit: BPF prog-id=12 op=UNLOAD Oct 2 20:20:18.228000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 20:20:16.634786 systemd[1]: Queued start job for default target multi-user.target. 
Oct 2 20:20:15.107223 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 20:20:16.634793 systemd[1]: Unnecessary job was removed for dev-sda6.device. Oct 2 20:20:15.107858 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:15Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 20:20:16.637239 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 2 20:20:15.107876 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:15Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 20:20:15.107901 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:15Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 20:20:15.107909 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:15Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 20:20:15.107934 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:15Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 20:20:15.107944 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:15Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 20:20:15.108317 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:15Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 20:20:15.108349 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:15Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 20:20:15.108360 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:15Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 20:20:15.108846 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:15Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 20:20:15.108876 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:15Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 20:20:15.108893 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:15Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 20:20:15.108905 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:15Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 20:20:15.108919 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:15Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" 
path=/var/lib/torcx/store/3510.3.0 Oct 2 20:20:15.108930 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:15Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 20:20:16.274395 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:16Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:20:16.274561 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:16Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:20:16.274621 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:16Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:20:16.274717 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:16Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:20:16.274747 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:16Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 20:20:16.274781 /usr/lib/systemd/system-generators/torcx-generator[830]: time="2023-10-02T20:20:16Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 20:20:18.228000 audit[938]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd3f36ed20 a2=4000 a3=7ffd3f36edbc items=0 ppid=1 pid=938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:18.228000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 20:20:18.308483 systemd[1]: Starting systemd-network-generator.service... Oct 2 20:20:18.335454 systemd[1]: Starting systemd-remount-fs.service... Oct 2 20:20:18.362459 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 20:20:18.405436 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 20:20:18.405506 systemd[1]: Stopped verity-setup.service. Oct 2 20:20:18.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.450449 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 20:20:18.470603 systemd[1]: Started systemd-journald.service. 
Oct 2 20:20:18.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.477937 systemd[1]: Mounted dev-hugepages.mount. Oct 2 20:20:18.485759 systemd[1]: Mounted dev-mqueue.mount. Oct 2 20:20:18.492675 systemd[1]: Mounted media.mount. Oct 2 20:20:18.499683 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 20:20:18.508654 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 20:20:18.517657 systemd[1]: Mounted tmp.mount. Oct 2 20:20:18.524741 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 20:20:18.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.533832 systemd[1]: Finished kmod-static-nodes.service. Oct 2 20:20:18.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.542888 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 20:20:18.543020 systemd[1]: Finished modprobe@configfs.service. Oct 2 20:20:18.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.551890 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 20:20:18.552052 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 20:20:18.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.560960 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 20:20:18.561190 systemd[1]: Finished modprobe@drm.service. Oct 2 20:20:18.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.570223 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 20:20:18.570550 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 20:20:18.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:20:18.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.579218 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 20:20:18.579539 systemd[1]: Finished modprobe@fuse.service. Oct 2 20:20:18.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.588198 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 20:20:18.588516 systemd[1]: Finished modprobe@loop.service. Oct 2 20:20:18.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.597250 systemd[1]: Finished systemd-modules-load.service. Oct 2 20:20:18.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.606319 systemd[1]: Finished systemd-network-generator.service. Oct 2 20:20:18.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.615205 systemd[1]: Finished systemd-remount-fs.service. Oct 2 20:20:18.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.624178 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 20:20:18.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.633829 systemd[1]: Reached target network-pre.target. Oct 2 20:20:18.645169 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 20:20:18.655997 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 20:20:18.662726 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 20:20:18.665920 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 20:20:18.673558 systemd[1]: Starting systemd-journal-flush.service... Oct 2 20:20:18.676824 systemd-journald[938]: Time spent on flushing to /var/log/journal/9b095031305c446eb54637b5256ae118 is 10.692ms for 1251 entries. 
Oct 2 20:20:18.676824 systemd-journald[938]: System Journal (/var/log/journal/9b095031305c446eb54637b5256ae118) is 8.0M, max 195.6M, 187.6M free. Oct 2 20:20:18.712536 systemd-journald[938]: Received client request to flush runtime journal. Oct 2 20:20:18.690525 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 20:20:18.691009 systemd[1]: Starting systemd-random-seed.service... Oct 2 20:20:18.706532 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 20:20:18.707043 systemd[1]: Starting systemd-sysctl.service... Oct 2 20:20:18.714089 systemd[1]: Starting systemd-sysusers.service... Oct 2 20:20:18.721079 systemd[1]: Starting systemd-udev-settle.service... Oct 2 20:20:18.728538 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 20:20:18.736583 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 20:20:18.744629 systemd[1]: Finished systemd-journal-flush.service. Oct 2 20:20:18.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.753646 systemd[1]: Finished systemd-random-seed.service. Oct 2 20:20:18.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.761633 systemd[1]: Finished systemd-sysctl.service. Oct 2 20:20:18.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.769621 systemd[1]: Finished systemd-sysusers.service. Oct 2 20:20:18.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:18.778543 systemd[1]: Reached target first-boot-complete.target. Oct 2 20:20:18.786733 udevadm[955]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 2 20:20:18.992289 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 20:20:19.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:19.001000 audit: BPF prog-id=16 op=LOAD Oct 2 20:20:19.001000 audit: BPF prog-id=17 op=LOAD Oct 2 20:20:19.001000 audit: BPF prog-id=6 op=UNLOAD Oct 2 20:20:19.001000 audit: BPF prog-id=7 op=UNLOAD Oct 2 20:20:19.002633 systemd[1]: Starting systemd-udevd.service... Oct 2 20:20:19.014444 systemd-udevd[956]: Using default interface naming scheme 'v252'. Oct 2 20:20:19.033554 systemd[1]: Started systemd-udevd.service. Oct 2 20:20:19.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:19.044769 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. 
Oct 2 20:20:19.045000 audit: BPF prog-id=18 op=LOAD Oct 2 20:20:19.046082 systemd[1]: Starting systemd-networkd.service... Oct 2 20:20:19.068000 audit: BPF prog-id=19 op=LOAD Oct 2 20:20:19.068000 audit: BPF prog-id=20 op=LOAD Oct 2 20:20:19.068000 audit: BPF prog-id=21 op=LOAD Oct 2 20:20:19.069410 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Oct 2 20:20:19.070395 systemd[1]: Starting systemd-userdbd.service... Oct 2 20:20:19.074000 audit[1020]: AVC avc: denied { confidentiality } for pid=1020 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 20:20:19.093479 kernel: ACPI: button: Sleep Button [SLPB] Oct 2 20:20:19.113488 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 20:20:19.116413 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 2 20:20:19.116449 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 20:20:19.135410 kernel: ACPI: button: Power Button [PWRF] Oct 2 20:20:19.160561 systemd[1]: Started systemd-userdbd.service. Oct 2 20:20:19.074000 audit[1020]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b001bd7d90 a1=4d8bc a2=7f412b734bc5 a3=5 items=40 ppid=956 pid=1020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:19.074000 audit: CWD cwd="/" Oct 2 20:20:19.074000 audit: PATH item=0 name=(null) inode=28020 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=1 name=(null) inode=28021 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=2 name=(null) inode=28020 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=3 name=(null) inode=28022 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=4 name=(null) inode=28020 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=5 name=(null) inode=28023 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=6 name=(null) inode=28023 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=7 name=(null) inode=28024 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=8 name=(null) inode=28023 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=9 name=(null) inode=28025 
dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=10 name=(null) inode=28023 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=11 name=(null) inode=28026 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=12 name=(null) inode=28023 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=13 name=(null) inode=28027 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=14 name=(null) inode=28023 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=15 name=(null) inode=28028 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=16 name=(null) inode=28020 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=17 name=(null) inode=28029 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=18 name=(null) inode=28029 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=19 name=(null) inode=28030 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=20 name=(null) inode=28029 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=21 name=(null) inode=28031 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=22 name=(null) inode=28029 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=23 name=(null) inode=28032 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=24 name=(null) inode=28029 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=25 name=(null) inode=28033 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=26 name=(null) inode=28029 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=27 name=(null) inode=28034 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=28 name=(null) inode=28020 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=29 name=(null) inode=28035 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=30 name=(null) inode=28035 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=31 name=(null) inode=28036 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=32 name=(null) inode=28035 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=33 name=(null) inode=28037 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=34 name=(null) inode=28035 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=35 name=(null) inode=28038 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=36 name=(null) inode=28035 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=37 name=(null) inode=28039 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=38 name=(null) inode=28035 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PATH item=39 name=(null) inode=28040 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:20:19.074000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 20:20:19.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:20:19.210416 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Oct 2 20:20:19.210635 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Oct 2 20:20:19.231414 kernel: IPMI message handler: version 39.2 Oct 2 20:20:19.292540 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Oct 2 20:20:19.292742 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Oct 2 20:20:19.292883 kernel: ipmi device interface Oct 2 20:20:19.310461 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Oct 2 20:20:19.352423 kernel: iTCO_vendor_support: vendor-support=0 Oct 2 20:20:19.390441 kernel: ipmi_si: IPMI System Interface driver Oct 2 20:20:19.390464 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Oct 2 20:20:19.390538 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Oct 2 20:20:19.410780 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Oct 2 20:20:19.448609 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Oct 2 20:20:19.448738 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Oct 2 20:20:19.466027 systemd-networkd[1004]: bond0: netdev ready Oct 2 20:20:19.468136 systemd-networkd[1004]: lo: Link UP Oct 2 20:20:19.468139 systemd-networkd[1004]: lo: Gained carrier Oct 2 20:20:19.468446 systemd-networkd[1004]: Enumeration completed Oct 2 20:20:19.468531 systemd[1]: Started systemd-networkd.service. Oct 2 20:20:19.468710 systemd-networkd[1004]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Oct 2 20:20:19.473836 systemd-networkd[1004]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:97:f9:a5.network. Oct 2 20:20:19.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:19.512227 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Oct 2 20:20:19.512334 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Oct 2 20:20:19.512410 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Oct 2 20:20:19.551625 kernel: ipmi_si: Adding ACPI-specified kcs state machine Oct 2 20:20:19.552406 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Oct 2 20:20:19.613652 kernel: intel_rapl_common: Found RAPL domain package Oct 2 20:20:19.613686 kernel: intel_rapl_common: Found RAPL domain core Oct 2 20:20:19.613702 kernel: intel_rapl_common: Found RAPL domain dram Oct 2 20:20:19.629722 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Oct 2 20:20:19.704451 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Oct 2 20:20:19.904446 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Oct 2 20:20:19.923457 kernel: ipmi_ssif: IPMI SSIF Interface driver Oct 2 20:20:19.926713 systemd[1]: Finished systemd-udev-settle.service. Oct 2 20:20:19.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:19.935164 systemd[1]: Starting lvm2-activation-early.service... 
Oct 2 20:20:19.952250 lvm[1063]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 20:20:19.982134 systemd[1]: Finished lvm2-activation-early.service. Oct 2 20:20:19.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:19.990590 systemd[1]: Reached target cryptsetup.target. Oct 2 20:20:20.005506 systemd[1]: Starting lvm2-activation.service... Oct 2 20:20:20.009140 lvm[1064]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 20:20:20.010445 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Oct 2 20:20:20.035263 systemd-networkd[1004]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:97:f9:a4.network. Oct 2 20:20:20.035420 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Oct 2 20:20:20.056494 systemd[1]: Finished lvm2-activation.service. Oct 2 20:20:20.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:20.064572 systemd[1]: Reached target local-fs-pre.target. Oct 2 20:20:20.072523 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 20:20:20.072539 systemd[1]: Reached target local-fs.target. Oct 2 20:20:20.080610 systemd[1]: Reached target machines.target. Oct 2 20:20:20.089176 systemd[1]: Starting ldconfig.service... Oct 2 20:20:20.105892 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 20:20:20.105914 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 20:20:20.106457 systemd[1]: Starting systemd-boot-update.service... Oct 2 20:20:20.111442 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Oct 2 20:20:20.118010 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 20:20:20.130309 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 20:20:20.130729 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 20:20:20.130856 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 20:20:20.133223 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 20:20:20.133453 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1066 (bootctl) Oct 2 20:20:20.134172 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 20:20:20.144346 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 20:20:20.144688 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 20:20:20.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:20.144873 systemd[1]: Finished systemd-machine-id-commit.service. 
Oct 2 20:20:20.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:20.148436 systemd-tmpfiles[1070]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 20:20:20.155109 systemd-tmpfiles[1070]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 20:20:20.178268 systemd-tmpfiles[1070]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 20:20:20.234036 systemd-fsck[1074]: fsck.fat 4.2 (2021-01-31) Oct 2 20:20:20.234036 systemd-fsck[1074]: /dev/sda1: 789 files, 115069/258078 clusters Oct 2 20:20:20.234414 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Oct 2 20:20:20.234650 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Oct 2 20:20:20.243625 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 20:20:20.254412 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Oct 2 20:20:20.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:20.273453 systemd[1]: Mounting boot.mount... Oct 2 20:20:20.294408 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Oct 2 20:20:20.295106 systemd-networkd[1004]: bond0: Link UP Oct 2 20:20:20.295303 systemd-networkd[1004]: enp1s0f1np1: Link UP Oct 2 20:20:20.295467 systemd-networkd[1004]: enp1s0f0np0: Link UP Oct 2 20:20:20.295595 systemd-networkd[1004]: enp1s0f1np1: Gained carrier Oct 2 20:20:20.296570 systemd-networkd[1004]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:97:f9:a4.network. Oct 2 20:20:20.301029 systemd[1]: Mounted boot.mount. Oct 2 20:20:20.329825 systemd[1]: Finished systemd-boot-update.service. Oct 2 20:20:20.345471 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Oct 2 20:20:20.345597 kernel: bond0: active interface up! Oct 2 20:20:20.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:20.394224 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 20:20:20.404024 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Oct 2 20:20:20.404049 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Oct 2 20:20:20.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:20.413302 systemd[1]: Starting audit-rules.service... Oct 2 20:20:20.420999 systemd[1]: Starting clean-ca-certificates.service... Oct 2 20:20:20.430022 systemd[1]: Starting systemd-journal-catalog-update.service... 
Oct 2 20:20:20.433000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 20:20:20.433000 audit[1097]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe7273f350 a2=420 a3=0 items=0 ppid=1080 pid=1097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:20.433000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 20:20:20.433967 augenrules[1097]: No rules Oct 2 20:20:20.439471 systemd[1]: Starting systemd-resolved.service... Oct 2 20:20:20.447416 systemd[1]: Starting systemd-timesyncd.service... Oct 2 20:20:20.463993 systemd[1]: Starting systemd-update-utmp.service... Oct 2 20:20:20.472407 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Oct 2 20:20:20.487797 systemd[1]: Finished audit-rules.service. Oct 2 20:20:20.492984 ldconfig[1065]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 20:20:20.494405 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Oct 2 20:20:20.515405 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Oct 2 20:20:20.516680 systemd-networkd[1004]: bond0: Gained carrier Oct 2 20:20:20.516767 systemd-networkd[1004]: enp1s0f0np0: Gained carrier Oct 2 20:20:20.545129 systemd[1]: Finished ldconfig.service. Oct 2 20:20:20.555410 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Oct 2 20:20:20.555425 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Oct 2 20:20:20.570614 systemd[1]: Finished clean-ca-certificates.service. Oct 2 20:20:20.577405 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Oct 2 20:20:20.577727 systemd-networkd[1004]: enp1s0f1np1: Link DOWN Oct 2 20:20:20.577730 systemd-networkd[1004]: enp1s0f1np1: Lost carrier Oct 2 20:20:20.585611 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 20:20:20.598250 systemd[1]: Starting systemd-update-done.service... Oct 2 20:20:20.605447 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 20:20:20.605698 systemd[1]: Finished systemd-update-utmp.service. Oct 2 20:20:20.613598 systemd[1]: Finished systemd-update-done.service. Oct 2 20:20:20.622722 systemd[1]: Started systemd-timesyncd.service. Oct 2 20:20:20.625578 systemd-resolved[1102]: Positive Trust Anchors: Oct 2 20:20:20.625583 systemd-resolved[1102]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 20:20:20.625603 systemd-resolved[1102]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 20:20:20.630680 systemd[1]: Reached target time-set.target. Oct 2 20:20:20.645466 systemd-resolved[1102]: Using system hostname 'ci-3510.3.0-a-444ba07d62'. 
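The audit records at 20:20:20.433 above carry the process title of the rule loader as a hex-encoded, NUL-separated PROCTITLE field. Decoding it, simply as an aid to reading the log, recovers the auditctl invocation that loaded /etc/audit/audit.rules, the file augenrules reported as containing no rules:

# Decode the hex PROCTITLE copied verbatim from the audit record above.
proctitle_hex = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
argv = bytes.fromhex(proctitle_hex).split(b"\x00")
print([a.decode() for a in argv])
# ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']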
Oct 2 20:20:20.735408 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Oct 2 20:20:20.754408 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 Oct 2 20:20:20.755973 systemd-networkd[1004]: enp1s0f1np1: Link UP Oct 2 20:20:20.756129 systemd-timesyncd[1103]: Network configuration changed, trying to establish connection. Oct 2 20:20:20.756160 systemd-networkd[1004]: enp1s0f1np1: Gained carrier Oct 2 20:20:20.756166 systemd-timesyncd[1103]: Network configuration changed, trying to establish connection. Oct 2 20:20:20.756992 systemd[1]: Started systemd-resolved.service. Oct 2 20:20:20.765514 systemd[1]: Reached target network.target. Oct 2 20:20:20.767560 systemd-timesyncd[1103]: Network configuration changed, trying to establish connection. Oct 2 20:20:20.767604 systemd-timesyncd[1103]: Network configuration changed, trying to establish connection. Oct 2 20:20:20.773484 systemd[1]: Reached target nss-lookup.target. Oct 2 20:20:20.781482 systemd[1]: Reached target sysinit.target. Oct 2 20:20:20.796515 systemd[1]: Started motdgen.path. Oct 2 20:20:20.804444 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Oct 2 20:20:20.819501 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 20:20:20.824435 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Oct 2 20:20:20.833541 systemd[1]: Started logrotate.timer. Oct 2 20:20:20.840533 systemd[1]: Started mdadm.timer. Oct 2 20:20:20.847484 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 20:20:20.855481 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 20:20:20.855496 systemd[1]: Reached target paths.target. Oct 2 20:20:20.862477 systemd[1]: Reached target timers.target. Oct 2 20:20:20.869618 systemd[1]: Listening on dbus.socket. Oct 2 20:20:20.877007 systemd[1]: Starting docker.socket... Oct 2 20:20:20.884895 systemd[1]: Listening on sshd.socket. Oct 2 20:20:20.891486 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 20:20:20.891723 systemd[1]: Listening on docker.socket. Oct 2 20:20:20.898483 systemd[1]: Reached target sockets.target. Oct 2 20:20:20.906436 systemd[1]: Reached target basic.target. Oct 2 20:20:20.913456 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 20:20:20.913469 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 20:20:20.913915 systemd[1]: Starting containerd.service... Oct 2 20:20:20.920836 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Oct 2 20:20:20.928989 systemd[1]: Starting coreos-metadata.service... Oct 2 20:20:20.936123 systemd[1]: Starting dbus.service... Oct 2 20:20:20.941924 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 20:20:20.947598 jq[1115]: false Oct 2 20:20:20.949981 systemd[1]: Starting extend-filesystems.service... Oct 2 20:20:20.956460 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 20:20:20.957118 systemd[1]: Starting motdgen.service... 
Oct 2 20:20:20.958440 extend-filesystems[1118]: Found sda Oct 2 20:20:20.977533 extend-filesystems[1118]: Found sda1 Oct 2 20:20:20.977533 extend-filesystems[1118]: Found sda2 Oct 2 20:20:20.977533 extend-filesystems[1118]: Found sda3 Oct 2 20:20:20.977533 extend-filesystems[1118]: Found usr Oct 2 20:20:20.977533 extend-filesystems[1118]: Found sda4 Oct 2 20:20:20.977533 extend-filesystems[1118]: Found sda6 Oct 2 20:20:20.977533 extend-filesystems[1118]: Found sda7 Oct 2 20:20:20.977533 extend-filesystems[1118]: Found sda9 Oct 2 20:20:20.977533 extend-filesystems[1118]: Checking size of /dev/sda9 Oct 2 20:20:20.977533 extend-filesystems[1118]: Resized partition /dev/sda9 Oct 2 20:20:21.091491 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Oct 2 20:20:20.958783 dbus-daemon[1114]: [system] SELinux support is enabled Oct 2 20:20:20.964140 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 20:20:21.091701 coreos-metadata[1110]: Oct 02 20:20:20.985 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Oct 2 20:20:21.091793 extend-filesystems[1133]: resize2fs 1.46.5 (30-Dec-2021) Oct 2 20:20:21.107457 coreos-metadata[1111]: Oct 02 20:20:20.985 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Oct 2 20:20:21.001286 systemd[1]: Starting prepare-critools.service... Oct 2 20:20:21.015102 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 20:20:21.029034 systemd[1]: Starting sshd-keygen.service... Oct 2 20:20:21.047915 systemd[1]: Starting systemd-logind.service... Oct 2 20:20:21.064441 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 20:20:21.065005 systemd[1]: Starting tcsd.service... Oct 2 20:20:21.108086 jq[1149]: true Oct 2 20:20:21.069127 systemd-logind[1146]: Watching system buttons on /dev/input/event3 (Power Button) Oct 2 20:20:21.069137 systemd-logind[1146]: Watching system buttons on /dev/input/event2 (Sleep Button) Oct 2 20:20:21.069147 systemd-logind[1146]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Oct 2 20:20:21.069241 systemd-logind[1146]: New seat seat0. Oct 2 20:20:21.083792 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 20:20:21.084149 systemd[1]: Starting update-engine.service... Oct 2 20:20:21.099163 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 20:20:21.115753 systemd[1]: Started dbus.service. Oct 2 20:20:21.124098 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 20:20:21.124184 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 20:20:21.124352 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 20:20:21.124436 systemd[1]: Finished motdgen.service. Oct 2 20:20:21.132244 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 20:20:21.132341 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Oct 2 20:20:21.135858 update_engine[1148]: I1002 20:20:21.135100 1148 main.cc:92] Flatcar Update Engine starting Oct 2 20:20:21.138998 update_engine[1148]: I1002 20:20:21.138962 1148 update_check_scheduler.cc:74] Next update check in 8m14s Oct 2 20:20:21.140700 tar[1151]: ./ Oct 2 20:20:21.140700 tar[1151]: ./macvlan Oct 2 20:20:21.143135 jq[1155]: false Oct 2 20:20:21.143640 dbus-daemon[1114]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 2 20:20:21.144174 systemd[1]: update-ssh-keys-after-ignition.service: Skipped due to 'exec-condition'. Oct 2 20:20:21.144255 systemd[1]: Condition check resulted in update-ssh-keys-after-ignition.service being skipped. Oct 2 20:20:21.144435 tar[1152]: crictl Oct 2 20:20:21.149053 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Oct 2 20:20:21.149140 systemd[1]: Condition check resulted in tcsd.service being skipped. Oct 2 20:20:21.150228 systemd[1]: Started systemd-logind.service. Oct 2 20:20:21.152321 env[1156]: time="2023-10-02T20:20:21.152299972Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 20:20:21.160715 env[1156]: time="2023-10-02T20:20:21.160697317Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 20:20:21.160776 env[1156]: time="2023-10-02T20:20:21.160768201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:20:21.161369 env[1156]: time="2023-10-02T20:20:21.161355338Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 20:20:21.161369 env[1156]: time="2023-10-02T20:20:21.161368706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:20:21.161495 env[1156]: time="2023-10-02T20:20:21.161485099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 20:20:21.161515 env[1156]: time="2023-10-02T20:20:21.161495216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 20:20:21.161515 env[1156]: time="2023-10-02T20:20:21.161502052Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 20:20:21.161515 env[1156]: time="2023-10-02T20:20:21.161507442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 20:20:21.161567 env[1156]: time="2023-10-02T20:20:21.161544714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:20:21.161675 env[1156]: time="2023-10-02T20:20:21.161667543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:20:21.161740 env[1156]: time="2023-10-02T20:20:21.161731304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 20:20:21.161770 env[1156]: time="2023-10-02T20:20:21.161741222Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 20:20:21.161770 env[1156]: time="2023-10-02T20:20:21.161766591Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 20:20:21.161815 env[1156]: time="2023-10-02T20:20:21.161773446Z" level=info msg="metadata content store policy set" policy=shared Oct 2 20:20:21.164473 systemd[1]: Started update-engine.service. Oct 2 20:20:21.177049 systemd[1]: Started locksmithd.service. Oct 2 20:20:21.177261 tar[1151]: ./static Oct 2 20:20:21.177325 env[1156]: time="2023-10-02T20:20:21.177312770Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 20:20:21.177352 env[1156]: time="2023-10-02T20:20:21.177331968Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 20:20:21.177352 env[1156]: time="2023-10-02T20:20:21.177343714Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 20:20:21.177411 env[1156]: time="2023-10-02T20:20:21.177365489Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 20:20:21.177411 env[1156]: time="2023-10-02T20:20:21.177376348Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 20:20:21.177411 env[1156]: time="2023-10-02T20:20:21.177386137Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 20:20:21.177411 env[1156]: time="2023-10-02T20:20:21.177392816Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 20:20:21.177411 env[1156]: time="2023-10-02T20:20:21.177400226Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 20:20:21.177501 env[1156]: time="2023-10-02T20:20:21.177413264Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 20:20:21.177501 env[1156]: time="2023-10-02T20:20:21.177425830Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 20:20:21.177501 env[1156]: time="2023-10-02T20:20:21.177433779Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 20:20:21.177501 env[1156]: time="2023-10-02T20:20:21.177442668Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 20:20:21.177564 env[1156]: time="2023-10-02T20:20:21.177501364Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 20:20:21.177564 env[1156]: time="2023-10-02T20:20:21.177552372Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 20:20:21.177710 env[1156]: time="2023-10-02T20:20:21.177701394Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Oct 2 20:20:21.177739 env[1156]: time="2023-10-02T20:20:21.177717354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 20:20:21.177739 env[1156]: time="2023-10-02T20:20:21.177725905Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 20:20:21.177770 env[1156]: time="2023-10-02T20:20:21.177757318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 20:20:21.177770 env[1156]: time="2023-10-02T20:20:21.177765944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 20:20:21.177801 env[1156]: time="2023-10-02T20:20:21.177773138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 20:20:21.177801 env[1156]: time="2023-10-02T20:20:21.177779274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 20:20:21.177801 env[1156]: time="2023-10-02T20:20:21.177785647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 20:20:21.177801 env[1156]: time="2023-10-02T20:20:21.177791848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 20:20:21.177801 env[1156]: time="2023-10-02T20:20:21.177797883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 20:20:21.177876 env[1156]: time="2023-10-02T20:20:21.177804117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 20:20:21.177876 env[1156]: time="2023-10-02T20:20:21.177811846Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 20:20:21.177931 env[1156]: time="2023-10-02T20:20:21.177885736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 20:20:21.177931 env[1156]: time="2023-10-02T20:20:21.177896243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 20:20:21.177931 env[1156]: time="2023-10-02T20:20:21.177905644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 20:20:21.177931 env[1156]: time="2023-10-02T20:20:21.177918758Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 20:20:21.177931 env[1156]: time="2023-10-02T20:20:21.177927491Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 20:20:21.178023 env[1156]: time="2023-10-02T20:20:21.177933362Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 20:20:21.178023 env[1156]: time="2023-10-02T20:20:21.177945956Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 20:20:21.178023 env[1156]: time="2023-10-02T20:20:21.177970506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 2 20:20:21.178125 env[1156]: time="2023-10-02T20:20:21.178098761Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 20:20:21.180362 env[1156]: time="2023-10-02T20:20:21.178132635Z" level=info msg="Connect containerd service" Oct 2 20:20:21.180362 env[1156]: time="2023-10-02T20:20:21.178151954Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 20:20:21.180362 env[1156]: time="2023-10-02T20:20:21.178429755Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 20:20:21.180362 env[1156]: time="2023-10-02T20:20:21.178748853Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 20:20:21.180362 env[1156]: time="2023-10-02T20:20:21.178784108Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 2 20:20:21.180362 env[1156]: time="2023-10-02T20:20:21.178518072Z" level=info msg="Start subscribing containerd event" Oct 2 20:20:21.180362 env[1156]: time="2023-10-02T20:20:21.178824012Z" level=info msg="containerd successfully booted in 0.026839s" Oct 2 20:20:21.180362 env[1156]: time="2023-10-02T20:20:21.178851727Z" level=info msg="Start recovering state" Oct 2 20:20:21.180362 env[1156]: time="2023-10-02T20:20:21.178918623Z" level=info msg="Start event monitor" Oct 2 20:20:21.180362 env[1156]: time="2023-10-02T20:20:21.178932148Z" level=info msg="Start snapshots syncer" Oct 2 20:20:21.180362 env[1156]: time="2023-10-02T20:20:21.178946460Z" level=info msg="Start cni network conf syncer for default" Oct 2 20:20:21.180362 env[1156]: time="2023-10-02T20:20:21.178996729Z" level=info msg="Start streaming server" Oct 2 20:20:21.184583 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 20:20:21.184727 systemd[1]: Reached target system-config.target. Oct 2 20:20:21.192546 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 20:20:21.192668 systemd[1]: Reached target user-config.target. Oct 2 20:20:21.199159 tar[1151]: ./vlan Oct 2 20:20:21.203212 systemd[1]: Started containerd.service. Oct 2 20:20:21.228589 tar[1151]: ./portmap Oct 2 20:20:21.256267 tar[1151]: ./host-local Oct 2 20:20:21.258803 locksmithd[1174]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 20:20:21.277416 tar[1151]: ./vrf Oct 2 20:20:21.299933 tar[1151]: ./bridge Oct 2 20:20:21.326890 tar[1151]: ./tuning Oct 2 20:20:21.348488 tar[1151]: ./firewall Oct 2 20:20:21.353039 systemd[1]: Finished prepare-critools.service. Oct 2 20:20:21.374858 tar[1151]: ./host-device Oct 2 20:20:21.398682 sshd_keygen[1145]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 20:20:21.399523 tar[1151]: ./sbr Oct 2 20:20:21.410265 systemd[1]: Finished sshd-keygen.service. Oct 2 20:20:21.417368 systemd[1]: Starting issuegen.service... Oct 2 20:20:21.420498 tar[1151]: ./loopback Oct 2 20:20:21.424666 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 20:20:21.424740 systemd[1]: Finished issuegen.service. Oct 2 20:20:21.432228 systemd[1]: Starting systemd-user-sessions.service... Oct 2 20:20:21.440687 systemd[1]: Finished systemd-user-sessions.service. Oct 2 20:20:21.441563 tar[1151]: ./dhcp Oct 2 20:20:21.449211 systemd[1]: Started getty@tty1.service. Oct 2 20:20:21.456161 systemd[1]: Started serial-getty@ttyS1.service. Oct 2 20:20:21.464562 systemd[1]: Reached target getty.target. Oct 2 20:20:21.500702 tar[1151]: ./ptp Oct 2 20:20:21.525419 tar[1151]: ./ipvlan Oct 2 20:20:21.549945 tar[1151]: ./bandwidth Oct 2 20:20:21.578214 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 20:20:21.602469 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Oct 2 20:20:21.630817 extend-filesystems[1133]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 2 20:20:21.630817 extend-filesystems[1133]: old_desc_blocks = 1, new_desc_blocks = 56 Oct 2 20:20:21.630817 extend-filesystems[1133]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. 
Oct 2 20:20:21.667529 extend-filesystems[1118]: Resized filesystem in /dev/sda9 Oct 2 20:20:21.667529 extend-filesystems[1118]: Found sdb Oct 2 20:20:21.631236 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 20:20:21.631313 systemd[1]: Finished extend-filesystems.service. Oct 2 20:20:21.817523 systemd-networkd[1004]: bond0: Gained IPv6LL Oct 2 20:20:21.817795 systemd-timesyncd[1103]: Network configuration changed, trying to establish connection. Oct 2 20:20:22.201638 systemd-timesyncd[1103]: Network configuration changed, trying to establish connection. Oct 2 20:20:22.201674 systemd-timesyncd[1103]: Network configuration changed, trying to establish connection. Oct 2 20:20:23.001603 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Oct 2 20:20:26.480225 login[1197]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 2 20:20:26.486141 login[1196]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 2 20:20:26.488955 systemd-logind[1146]: New session 1 of user core. Oct 2 20:20:26.489784 systemd[1]: Created slice user-500.slice. Oct 2 20:20:26.490475 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 20:20:26.491842 systemd-logind[1146]: New session 2 of user core. Oct 2 20:20:26.495777 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 20:20:26.496584 systemd[1]: Starting user@500.service... Oct 2 20:20:26.499848 (systemd)[1205]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:20:26.718244 systemd[1205]: Queued start job for default target default.target. Oct 2 20:20:26.718831 systemd[1205]: Reached target paths.target. Oct 2 20:20:26.718872 systemd[1205]: Reached target sockets.target. Oct 2 20:20:26.718903 systemd[1205]: Reached target timers.target. Oct 2 20:20:26.718934 systemd[1205]: Reached target basic.target. Oct 2 20:20:26.718999 systemd[1205]: Reached target default.target. Oct 2 20:20:26.719049 systemd[1205]: Startup finished in 203ms. Oct 2 20:20:26.719088 systemd[1]: Started user@500.service. Oct 2 20:20:26.720233 systemd[1]: Started session-1.scope. Oct 2 20:20:26.720982 systemd[1]: Started session-2.scope. Oct 2 20:20:27.130565 coreos-metadata[1111]: Oct 02 20:20:27.130 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Oct 2 20:20:27.131310 coreos-metadata[1110]: Oct 02 20:20:27.130 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Oct 2 20:20:28.130946 coreos-metadata[1111]: Oct 02 20:20:28.130 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Oct 2 20:20:28.131765 coreos-metadata[1110]: Oct 02 20:20:28.130 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Oct 2 20:20:28.424458 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Oct 2 20:20:28.431437 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Oct 2 20:20:29.161258 coreos-metadata[1111]: Oct 02 20:20:29.161 INFO Fetch successful Oct 2 20:20:29.161519 coreos-metadata[1110]: Oct 02 20:20:29.161 INFO Fetch successful Oct 2 20:20:29.182439 systemd[1]: Finished coreos-metadata.service. Oct 2 20:20:29.183386 systemd[1]: Started packet-phone-home.service. 
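coreos-metadata's first fetch of https://metadata.packet.net/metadata fails with a DNS error (bond0 had only just come up) and the next attempt succeeds. A minimal, purely illustrative sketch of the same fetch-with-retry pattern follows; the attempt count and delay are assumptions, not what the service actually uses.

```python
# Hedged sketch: retry the packet/Equinix Metal metadata fetch the way the log shows
# coreos-metadata behaving (fail on attempt #1, succeed on a later attempt).
# Retry count and sleep interval are assumptions.
import json
import time
import urllib.request

URL = "https://metadata.packet.net/metadata"   # URL as logged above

def fetch_metadata(attempts=5, delay=1.0):
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                return json.load(resp)
        except OSError as err:                  # covers DNS and connect errors
            print(f"Attempt #{attempt} failed: {err}")
            time.sleep(delay)
    raise RuntimeError("metadata fetch did not succeed")

if __name__ == "__main__":
    meta = fetch_metadata()
    print(sorted(meta))                         # top-level metadata keys
```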
Oct 2 20:20:29.184109 unknown[1110]: wrote ssh authorized keys file for user: core Oct 2 20:20:29.189608 curl[1227]: % Total % Received % Xferd Average Speed Time Time Time Current Oct 2 20:20:29.189608 curl[1227]: Dload Upload Total Spent Left Speed Oct 2 20:20:29.201341 update-ssh-keys[1228]: Updated "/home/core/.ssh/authorized_keys" Oct 2 20:20:29.201581 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Oct 2 20:20:29.201807 systemd[1]: Reached target multi-user.target. Oct 2 20:20:29.202466 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 20:20:29.206486 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 20:20:29.206557 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 20:20:29.206691 systemd[1]: Startup finished in 1.849s (kernel) + 6.578s (initrd) + 14.795s (userspace) = 23.224s. Oct 2 20:20:29.342832 curl[1227]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Oct 2 20:20:29.345180 systemd[1]: packet-phone-home.service: Deactivated successfully. Oct 2 20:20:36.593221 systemd[1]: Created slice system-sshd.slice. Oct 2 20:20:36.593888 systemd[1]: Started sshd@0-139.178.89.245:22-139.178.89.65:55468.service. Oct 2 20:20:36.640396 sshd[1237]: Accepted publickey for core from 139.178.89.65 port 55468 ssh2: RSA SHA256:M4fPbwtaE29dxMxiSyaa1yIvxglYsCEa+0scmSZWaB4 Oct 2 20:20:36.641708 sshd[1237]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:20:36.646241 systemd-logind[1146]: New session 3 of user core. Oct 2 20:20:36.648021 systemd[1]: Started session-3.scope. Oct 2 20:20:36.707818 systemd[1]: Started sshd@1-139.178.89.245:22-139.178.89.65:55482.service. Oct 2 20:20:36.741409 sshd[1242]: Accepted publickey for core from 139.178.89.65 port 55482 ssh2: RSA SHA256:M4fPbwtaE29dxMxiSyaa1yIvxglYsCEa+0scmSZWaB4 Oct 2 20:20:36.742058 sshd[1242]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:20:36.744286 systemd-logind[1146]: New session 4 of user core. Oct 2 20:20:36.744848 systemd[1]: Started session-4.scope. Oct 2 20:20:36.797551 sshd[1242]: pam_unix(sshd:session): session closed for user core Oct 2 20:20:36.799068 systemd[1]: sshd@1-139.178.89.245:22-139.178.89.65:55482.service: Deactivated successfully. Oct 2 20:20:36.799368 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 20:20:36.799764 systemd-logind[1146]: Session 4 logged out. Waiting for processes to exit. Oct 2 20:20:36.800271 systemd[1]: Started sshd@2-139.178.89.245:22-139.178.89.65:55490.service. Oct 2 20:20:36.800741 systemd-logind[1146]: Removed session 4. Oct 2 20:20:36.834516 sshd[1248]: Accepted publickey for core from 139.178.89.65 port 55490 ssh2: RSA SHA256:M4fPbwtaE29dxMxiSyaa1yIvxglYsCEa+0scmSZWaB4 Oct 2 20:20:36.835437 sshd[1248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:20:36.838638 systemd-logind[1146]: New session 5 of user core. Oct 2 20:20:36.839334 systemd[1]: Started session-5.scope. Oct 2 20:20:36.894130 sshd[1248]: pam_unix(sshd:session): session closed for user core Oct 2 20:20:36.900710 systemd[1]: sshd@2-139.178.89.245:22-139.178.89.65:55490.service: Deactivated successfully. Oct 2 20:20:36.902249 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 20:20:36.903970 systemd-logind[1146]: Session 5 logged out. Waiting for processes to exit. Oct 2 20:20:36.906501 systemd[1]: Started sshd@3-139.178.89.245:22-139.178.89.65:55498.service. 
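The sshd and systemd-logind entries that follow repeat one fixed pattern per login (Accepted publickey for core from <addr> port <port>, session opened, session closed, Removed session N). A small, hedged helper for tallying those logins from journal text; the regex matches only the exact phrasing visible in this excerpt and assumes one journal entry per input line.

```python
# Hedged sketch: summarize sshd login lines of the shape shown in this journal
# ("Accepted publickey for core from <ip> port <port> ssh2: RSA SHA256:...").
import re
import sys
from collections import Counter

ACCEPT = re.compile(
    r"Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: RSA (SHA256:\S+)"
)

def summarize(lines):
    per_key = Counter()
    for line in lines:
        m = ACCEPT.search(line)
        if m:
            user, ip, _port, fingerprint = m.groups()
            per_key[(user, ip, fingerprint)] += 1
    return per_key

if __name__ == "__main__":
    for (user, ip, fp), n in summarize(sys.stdin).items():
        print(f"{n:3d} logins  user={user}  from={ip}  key={fp}")
```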
Oct 2 20:20:36.908890 systemd-logind[1146]: Removed session 5. Oct 2 20:20:36.978398 sshd[1255]: Accepted publickey for core from 139.178.89.65 port 55498 ssh2: RSA SHA256:M4fPbwtaE29dxMxiSyaa1yIvxglYsCEa+0scmSZWaB4 Oct 2 20:20:36.981524 sshd[1255]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:20:36.991662 systemd-logind[1146]: New session 6 of user core. Oct 2 20:20:36.994065 systemd[1]: Started session-6.scope. Oct 2 20:20:37.074490 sshd[1255]: pam_unix(sshd:session): session closed for user core Oct 2 20:20:37.080863 systemd[1]: sshd@3-139.178.89.245:22-139.178.89.65:55498.service: Deactivated successfully. Oct 2 20:20:37.082480 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 20:20:37.084159 systemd-logind[1146]: Session 6 logged out. Waiting for processes to exit. Oct 2 20:20:37.086778 systemd[1]: Started sshd@4-139.178.89.245:22-139.178.89.65:55512.service. Oct 2 20:20:37.089190 systemd-logind[1146]: Removed session 6. Oct 2 20:20:37.159212 sshd[1261]: Accepted publickey for core from 139.178.89.65 port 55512 ssh2: RSA SHA256:M4fPbwtaE29dxMxiSyaa1yIvxglYsCEa+0scmSZWaB4 Oct 2 20:20:37.162232 sshd[1261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:20:37.172534 systemd-logind[1146]: New session 7 of user core. Oct 2 20:20:37.174914 systemd[1]: Started session-7.scope. Oct 2 20:20:37.276129 sudo[1264]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 20:20:37.276734 sudo[1264]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:20:37.294849 dbus-daemon[1114]: \xd0\xfd%\xf3[U: received setenforce notice (enforcing=778066272) Oct 2 20:20:37.299861 sudo[1264]: pam_unix(sudo:session): session closed for user root Oct 2 20:20:37.305510 sshd[1261]: pam_unix(sshd:session): session closed for user core Oct 2 20:20:37.312346 systemd[1]: sshd@4-139.178.89.245:22-139.178.89.65:55512.service: Deactivated successfully. Oct 2 20:20:37.313977 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 20:20:37.315673 systemd-logind[1146]: Session 7 logged out. Waiting for processes to exit. Oct 2 20:20:37.318349 systemd[1]: Started sshd@5-139.178.89.245:22-139.178.89.65:55524.service. Oct 2 20:20:37.320739 systemd-logind[1146]: Removed session 7. Oct 2 20:20:37.391510 sshd[1268]: Accepted publickey for core from 139.178.89.65 port 55524 ssh2: RSA SHA256:M4fPbwtaE29dxMxiSyaa1yIvxglYsCEa+0scmSZWaB4 Oct 2 20:20:37.394888 sshd[1268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:20:37.405071 systemd-logind[1146]: New session 8 of user core. Oct 2 20:20:37.407521 systemd[1]: Started session-8.scope. Oct 2 20:20:37.479807 sudo[1272]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 20:20:37.479911 sudo[1272]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:20:37.481615 sudo[1272]: pam_unix(sudo:session): session closed for user root Oct 2 20:20:37.483822 sudo[1271]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 20:20:37.483926 sudo[1271]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:20:37.489050 systemd[1]: Stopping audit-rules.service... 
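Each privileged action in this stretch is recorded by sudo in a fixed field layout (core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1, then the removal of the audit rules files and the systemctl restart audit-rules). A hedged sketch that extracts those COMMAND= fields for review; it assumes one journal entry per input line and only the field layout shown here.

```python
# Hedged sketch: pull the COMMAND= field out of sudo journal lines like the ones above.
# Only the field layout visible in this excerpt is handled.
import re
import sys

SUDO = re.compile(r"sudo\[\d+\]: (\S+) : .*?USER=(\S+) ; COMMAND=(.+)$")

def sudo_commands(lines):
    for line in lines:
        m = SUDO.search(line)
        if m:
            invoking_user, target_user, command = m.groups()
            yield invoking_user, target_user, command.strip()

if __name__ == "__main__":
    for who, as_user, cmd in sudo_commands(sys.stdin):
        print(f"{who} -> {as_user}: {cmd}")
```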
Oct 2 20:20:37.489000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 20:20:37.489839 auditctl[1275]: No rules Oct 2 20:20:37.490028 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 20:20:37.490115 systemd[1]: Stopped audit-rules.service. Oct 2 20:20:37.490888 systemd[1]: Starting audit-rules.service... Oct 2 20:20:37.495259 kernel: kauditd_printk_skb: 93 callbacks suppressed Oct 2 20:20:37.495288 kernel: audit: type=1305 audit(1696278037.489:125): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 20:20:37.501180 augenrules[1292]: No rules Oct 2 20:20:37.501500 systemd[1]: Finished audit-rules.service. Oct 2 20:20:37.501960 sudo[1271]: pam_unix(sudo:session): session closed for user root Oct 2 20:20:37.504888 systemd[1]: Started sshd@6-139.178.89.245:22-139.178.89.65:55540.service. Oct 2 20:20:37.489000 audit[1275]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc94d57b10 a2=420 a3=0 items=0 ppid=1 pid=1275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:37.510419 sshd[1268]: pam_unix(sshd:session): session closed for user core Oct 2 20:20:37.511910 systemd[1]: sshd@5-139.178.89.245:22-139.178.89.65:55524.service: Deactivated successfully. Oct 2 20:20:37.512273 systemd[1]: session-8.scope: Deactivated successfully. Oct 2 20:20:37.512730 systemd-logind[1146]: Session 8 logged out. Waiting for processes to exit. Oct 2 20:20:37.513281 systemd-logind[1146]: Removed session 8. Oct 2 20:20:37.541867 kernel: audit: type=1300 audit(1696278037.489:125): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc94d57b10 a2=420 a3=0 items=0 ppid=1 pid=1275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:37.541936 kernel: audit: type=1327 audit(1696278037.489:125): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 20:20:37.489000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 20:20:37.551391 kernel: audit: type=1131 audit(1696278037.489:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:37.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:37.573860 kernel: audit: type=1130 audit(1696278037.501:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:37.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:37.596353 kernel: audit: type=1106 audit(1696278037.501:128): pid=1271 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? 
addr=? terminal=? res=success' Oct 2 20:20:37.501000 audit[1271]: USER_END pid=1271 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:20:37.601420 sshd[1297]: Accepted publickey for core from 139.178.89.65 port 55540 ssh2: RSA SHA256:M4fPbwtaE29dxMxiSyaa1yIvxglYsCEa+0scmSZWaB4 Oct 2 20:20:37.602713 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:20:37.605077 systemd-logind[1146]: New session 9 of user core. Oct 2 20:20:37.605532 systemd[1]: Started session-9.scope. Oct 2 20:20:37.622315 kernel: audit: type=1104 audit(1696278037.501:129): pid=1271 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:20:37.501000 audit[1271]: CRED_DISP pid=1271 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:20:37.645842 kernel: audit: type=1130 audit(1696278037.504:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.89.245:22-139.178.89.65:55540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:37.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.89.245:22-139.178.89.65:55540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:37.652256 sudo[1301]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 20:20:37.652365 sudo[1301]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:20:37.671321 kernel: audit: type=1106 audit(1696278037.510:131): pid=1268 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 20:20:37.510000 audit[1268]: USER_END pid=1268 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 20:20:37.703310 kernel: audit: type=1104 audit(1696278037.510:132): pid=1268 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 20:20:37.510000 audit[1268]: CRED_DISP pid=1268 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 20:20:37.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.89.245:22-139.178.89.65:55524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:20:37.600000 audit[1297]: USER_ACCT pid=1297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 20:20:37.602000 audit[1297]: CRED_ACQ pid=1297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 20:20:37.602000 audit[1297]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb6c7cad0 a2=3 a3=0 items=0 ppid=1 pid=1297 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:37.602000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 20:20:37.607000 audit[1297]: USER_START pid=1297 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 20:20:37.607000 audit[1300]: CRED_ACQ pid=1300 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 20:20:37.651000 audit[1301]: USER_ACCT pid=1301 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:20:37.651000 audit[1301]: CRED_REFR pid=1301 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:20:37.653000 audit[1301]: USER_START pid=1301 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:20:38.137299 systemd[1]: Reloading. Oct 2 20:20:38.167351 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2023-10-02T20:20:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 20:20:38.167366 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2023-10-02T20:20:38Z" level=info msg="torcx already run" Oct 2 20:20:38.233876 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 20:20:38.233888 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 20:20:38.248036 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
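The kernel audit records above carry the auditctl invocation only as a hex-encoded PROCTITLE (proctitle=2F7362696E2F617564697463746C002D44). Decoding it recovers the NUL-separated argv, which matches the "No rules" / CONFIG_CHANGE op=remove_rule entries: auditctl -D flushes all loaded rules before augenrules reloads them.

```python
# Decode the hex PROCTITLE field from the audit records above.
# argv elements are NUL-separated in the raw proctitle value.
proctitle = "2F7362696E2F617564697463746C002D44"   # value logged above
argv = bytes.fromhex(proctitle).split(b"\x00")
print([a.decode() for a in argv])                  # -> ['/sbin/auditctl', '-D']
```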
Oct 2 20:20:38.294000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.294000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.294000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.294000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.294000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.294000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.294000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.294000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.294000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.294000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.294000 audit: BPF prog-id=29 op=LOAD Oct 2 20:20:38.294000 audit: BPF prog-id=23 op=UNLOAD Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit: BPF prog-id=30 op=LOAD Oct 2 20:20:38.295000 audit: BPF prog-id=13 op=UNLOAD Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit: BPF prog-id=31 op=LOAD Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit: BPF prog-id=32 op=LOAD Oct 2 20:20:38.295000 audit: BPF prog-id=14 op=UNLOAD Oct 2 20:20:38.295000 audit: BPF prog-id=15 op=UNLOAD Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit: BPF prog-id=33 op=LOAD Oct 2 20:20:38.295000 audit: BPF prog-id=19 op=UNLOAD Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit: BPF prog-id=34 op=LOAD Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit: BPF prog-id=35 op=LOAD Oct 2 20:20:38.296000 audit: BPF prog-id=20 op=UNLOAD Oct 2 20:20:38.296000 audit: BPF prog-id=21 op=UNLOAD Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit: BPF prog-id=36 op=LOAD Oct 2 20:20:38.296000 audit: BPF prog-id=24 op=UNLOAD Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit: BPF prog-id=37 op=LOAD Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.296000 audit: BPF prog-id=38 op=LOAD Oct 2 20:20:38.296000 audit: BPF prog-id=25 op=UNLOAD Oct 2 20:20:38.297000 audit: BPF prog-id=26 op=UNLOAD Oct 2 20:20:38.297000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.297000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.297000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.297000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.297000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.297000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.297000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.297000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.297000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.297000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.297000 audit: BPF prog-id=39 op=LOAD Oct 2 20:20:38.297000 audit: BPF prog-id=18 op=UNLOAD Oct 2 20:20:38.298000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.298000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.298000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.298000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.298000 audit: BPF prog-id=40 op=LOAD Oct 2 20:20:38.298000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 20:20:38.298000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.298000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.299000 audit: BPF prog-id=41 op=LOAD Oct 2 20:20:38.299000 audit: BPF prog-id=16 op=UNLOAD Oct 2 20:20:38.299000 audit: BPF prog-id=17 op=UNLOAD Oct 2 20:20:38.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.299000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.299000 audit: BPF prog-id=42 op=LOAD Oct 2 20:20:38.299000 audit: BPF prog-id=27 op=UNLOAD Oct 2 20:20:38.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:38.300000 audit: BPF prog-id=43 op=LOAD Oct 2 20:20:38.300000 audit: BPF prog-id=22 op=UNLOAD Oct 2 20:20:38.306071 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 20:20:38.331038 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 20:20:38.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:38.331425 systemd[1]: Reached target network-online.target. Oct 2 20:20:38.332227 systemd[1]: Started kubelet.service. Oct 2 20:20:38.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:20:38.930607 kubelet[1388]: E1002 20:20:38.930496 1388 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Oct 2 20:20:38.935579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 20:20:38.935886 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 20:20:38.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 20:20:39.317302 systemd[1]: Stopped kubelet.service. Oct 2 20:20:39.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:39.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:39.334627 systemd[1]: Reloading. Oct 2 20:20:39.366716 /usr/lib/systemd/system-generators/torcx-generator[1494]: time="2023-10-02T20:20:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 20:20:39.366730 /usr/lib/systemd/system-generators/torcx-generator[1494]: time="2023-10-02T20:20:39Z" level=info msg="torcx already run" Oct 2 20:20:39.415731 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 20:20:39.415739 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 20:20:39.428062 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
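kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is normally written by kubeadm init or kubeadm join, so failing before provisioning is expected. A hedged pre-flight sketch below reproduces the same missing-file check and prints a minimal KubeletConfiguration stub; the stub's values are illustrative assumptions, apart from cgroupDriver: systemd matching the SystemdCgroup:true runc option in the containerd config above.

```python
# Hedged sketch: reproduce the missing-file check kubelet fails on above and show a
# minimal KubeletConfiguration stub. On kubeadm-managed nodes this file is normally
# written by 'kubeadm init'/'kubeadm join'; the values below are illustrative only.
import pathlib
import sys

CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")   # path from the error above

MINIMAL_STUB = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                       # matches SystemdCgroup:true above
staticPodPath: /etc/kubernetes/manifests    # assumed, typical kubeadm default
"""

if __name__ == "__main__":
    if CONFIG.exists():
        print(f"{CONFIG} present ({CONFIG.stat().st_size} bytes)")
        sys.exit(0)
    print(f"{CONFIG} missing - kubelet will exit just like in the log above.")
    print("A minimal config would look like:\n")
    print(MINIMAL_STUB)
```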
Oct 2 20:20:39.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.474000 audit: BPF prog-id=44 op=LOAD Oct 2 20:20:39.474000 audit: BPF prog-id=29 op=UNLOAD Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit: BPF prog-id=45 op=LOAD Oct 2 20:20:39.475000 audit: BPF prog-id=30 op=UNLOAD Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit: BPF prog-id=46 op=LOAD Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.475000 audit: BPF prog-id=47 op=LOAD Oct 2 20:20:39.475000 audit: BPF prog-id=31 op=UNLOAD Oct 2 20:20:39.475000 audit: BPF prog-id=32 op=UNLOAD Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit: BPF prog-id=48 op=LOAD Oct 2 20:20:39.476000 audit: BPF prog-id=33 op=UNLOAD Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit: BPF prog-id=49 op=LOAD Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit: BPF prog-id=50 op=LOAD Oct 2 20:20:39.476000 audit: BPF prog-id=34 op=UNLOAD Oct 2 20:20:39.476000 audit: BPF prog-id=35 op=UNLOAD Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.477000 audit: BPF prog-id=51 op=LOAD Oct 2 20:20:39.477000 audit: BPF prog-id=36 op=UNLOAD Oct 2 20:20:39.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.477000 audit: BPF prog-id=52 op=LOAD Oct 2 20:20:39.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.477000 audit: BPF prog-id=53 op=LOAD Oct 2 20:20:39.477000 audit: BPF prog-id=37 op=UNLOAD Oct 2 20:20:39.477000 audit: BPF prog-id=38 op=UNLOAD Oct 2 20:20:39.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.478000 audit: BPF prog-id=54 op=LOAD Oct 2 20:20:39.478000 audit: BPF prog-id=39 op=UNLOAD Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit: BPF prog-id=55 op=LOAD Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit: BPF prog-id=56 op=LOAD Oct 2 20:20:39.479000 audit: BPF prog-id=40 op=UNLOAD Oct 2 20:20:39.479000 audit: BPF prog-id=41 op=UNLOAD Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.479000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.480000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.480000 audit: BPF prog-id=57 op=LOAD Oct 2 20:20:39.480000 audit: BPF prog-id=42 op=UNLOAD Oct 2 20:20:39.480000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.480000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.480000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.480000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.480000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.480000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.480000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.480000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.480000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.480000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.480000 audit: BPF prog-id=58 op=LOAD Oct 2 20:20:39.480000 audit: BPF prog-id=43 op=UNLOAD Oct 2 20:20:39.487619 systemd[1]: Started kubelet.service. Oct 2 20:20:39.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:39.508977 kubelet[1549]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 20:20:39.508977 kubelet[1549]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 20:20:39.508977 kubelet[1549]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
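
During the reload above, each BPF prog-id LOAD/UNLOAD pair is bracketed by AVC denials for capability=39 and capability=38, which are CAP_BPF and CAP_PERFMON respectively (the capability=33 denials in the kubelet records further down are CAP_MAC_ADMIN). The audit records that follow also carry hex-encoded PROCTITLE fields; the sketch below is a hypothetical reading aid, assuming only a Python 3 interpreter, that maps those capability numbers to names and decodes one PROCTITLE value copied verbatim from a NETFILTER_CFG record later in this log.

#!/usr/bin/env python3
"""Hypothetical helper for reading the audit records in this log: name the
capability numbers in the AVC lines and decode a PROCTITLE field."""

# Values from include/uapi/linux/capability.h, matching the { ... } labels above.
CAPABILITY_NAMES = {33: "CAP_MAC_ADMIN", 38: "CAP_PERFMON", 39: "CAP_BPF"}

def decode_proctitle(hex_value: str) -> str:
    # auditd hex-encodes the process title; argv entries are NUL-separated.
    argv = bytes.fromhex(hex_value).split(b"\x00")
    return " ".join(arg.decode("utf-8", errors="replace") for arg in argv)

if __name__ == "__main__":
    for cap in (38, 39):
        print(f"capability={cap} -> {CAPABILITY_NAMES[cap]}")
    # Copied verbatim from one of the NETFILTER_CFG audit records further down.
    sample = ("69707461626C6573002D770035002D5700313030303030002D4E"
              "004B5542452D49505441424C45532D48494E54002D74006D616E676C65")
    print(decode_proctitle(sample))
    # -> iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle

Decoded this way, the NETFILTER_CFG records below are the iptables/ip6tables invocations (with -w/-W lock-wait options) that create the KUBE-* chains the kubelet later reports as "Initialized iptables rules." for IPv4 and IPv6.
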
Oct 2 20:20:39.509535 kubelet[1549]: I1002 20:20:39.509494 1549 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 20:20:39.511661 kubelet[1549]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 20:20:39.511661 kubelet[1549]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 20:20:39.511661 kubelet[1549]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 20:20:39.633620 kubelet[1549]: I1002 20:20:39.633591 1549 server.go:413] "Kubelet version" kubeletVersion="v1.25.10" Oct 2 20:20:39.633620 kubelet[1549]: I1002 20:20:39.633621 1549 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 20:20:39.633752 kubelet[1549]: I1002 20:20:39.633717 1549 server.go:825] "Client rotation is on, will bootstrap in background" Oct 2 20:20:39.645025 kubelet[1549]: I1002 20:20:39.644984 1549 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 20:20:39.666885 kubelet[1549]: I1002 20:20:39.666840 1549 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 2 20:20:39.667029 kubelet[1549]: I1002 20:20:39.666993 1549 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 20:20:39.667767 kubelet[1549]: I1002 20:20:39.667731 1549 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 20:20:39.667767 kubelet[1549]: I1002 20:20:39.667743 1549 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 20:20:39.667767 kubelet[1549]: I1002 20:20:39.667750 1549 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 20:20:39.668057 kubelet[1549]: I1002 20:20:39.668023 1549 state_mem.go:36] "Initialized new in-memory state store" Oct 2 
20:20:39.674268 kubelet[1549]: I1002 20:20:39.674259 1549 kubelet.go:381] "Attempting to sync node with API server" Oct 2 20:20:39.674306 kubelet[1549]: I1002 20:20:39.674272 1549 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 20:20:39.674306 kubelet[1549]: I1002 20:20:39.674297 1549 kubelet.go:281] "Adding apiserver pod source" Oct 2 20:20:39.674306 kubelet[1549]: I1002 20:20:39.674305 1549 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 20:20:39.674390 kubelet[1549]: E1002 20:20:39.674311 1549 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:39.674390 kubelet[1549]: E1002 20:20:39.674358 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:39.675977 kubelet[1549]: I1002 20:20:39.675952 1549 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 20:20:39.677708 kubelet[1549]: W1002 20:20:39.677700 1549 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 2 20:20:39.678383 kubelet[1549]: I1002 20:20:39.678375 1549 server.go:1175] "Started kubelet" Oct 2 20:20:39.678621 kubelet[1549]: I1002 20:20:39.678594 1549 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 20:20:39.682306 kubelet[1549]: E1002 20:20:39.682292 1549 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 20:20:39.682351 kubelet[1549]: E1002 20:20:39.682345 1549 kubelet.go:1317] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 20:20:39.683299 kubelet[1549]: I1002 20:20:39.683292 1549 server.go:438] "Adding debug handlers to kubelet server" Oct 2 20:20:39.683000 audit[1549]: AVC avc: denied { mac_admin } for pid=1549 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.683000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:20:39.683000 audit[1549]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c001160240 a1=c0000efbf0 a2=c001160210 a3=25 items=0 ppid=1 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.683000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:20:39.683000 audit[1549]: AVC avc: denied { mac_admin } for pid=1549 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.683000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:20:39.683000 audit[1549]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00116c1c0 a1=c0000efc08 a2=c0011602d0 a3=25 items=0 ppid=1 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.683000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:20:39.683837 kubelet[1549]: I1002 20:20:39.683545 1549 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 20:20:39.683837 kubelet[1549]: I1002 20:20:39.683564 1549 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 20:20:39.683837 kubelet[1549]: I1002 20:20:39.683665 1549 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 20:20:39.685905 kubelet[1549]: I1002 20:20:39.685869 1549 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 20:20:39.685937 kubelet[1549]: I1002 20:20:39.685920 1549 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 20:20:39.686331 kubelet[1549]: E1002 20:20:39.686321 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:20:39.689234 kubelet[1549]: E1002 20:20:39.689221 1549 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.67.124.211\" not 
found" node="10.67.124.211" Oct 2 20:20:39.695330 kubelet[1549]: I1002 20:20:39.695293 1549 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 20:20:39.695330 kubelet[1549]: I1002 20:20:39.695302 1549 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 20:20:39.695330 kubelet[1549]: I1002 20:20:39.695310 1549 state_mem.go:36] "Initialized new in-memory state store" Oct 2 20:20:39.695997 kubelet[1549]: I1002 20:20:39.695961 1549 policy_none.go:49] "None policy: Start" Oct 2 20:20:39.696229 kubelet[1549]: I1002 20:20:39.696199 1549 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 20:20:39.696229 kubelet[1549]: I1002 20:20:39.696209 1549 state_mem.go:35] "Initializing new in-memory state store" Oct 2 20:20:39.698598 systemd[1]: Created slice kubepods.slice. Oct 2 20:20:39.700657 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 20:20:39.702033 systemd[1]: Created slice kubepods-besteffort.slice. Oct 2 20:20:39.703000 audit[1575]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:39.703000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc10890e50 a2=0 a3=7ffc10890e3c items=0 ppid=1549 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.703000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 20:20:39.703000 audit[1578]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:39.703000 audit[1578]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7fffb3d90110 a2=0 a3=7fffb3d900fc items=0 ppid=1549 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.703000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 20:20:39.722980 kubelet[1549]: I1002 20:20:39.722968 1549 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 20:20:39.723019 kubelet[1549]: I1002 20:20:39.722999 1549 server.go:86] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 20:20:39.722000 audit[1549]: AVC avc: denied { mac_admin } for pid=1549 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:39.722000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:20:39.722000 audit[1549]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000728780 a1=c000742600 a2=c000728720 a3=25 items=0 ppid=1 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.722000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:20:39.723162 kubelet[1549]: I1002 20:20:39.723105 1549 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 20:20:39.723326 kubelet[1549]: E1002 20:20:39.723317 1549 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.124.211\" not found" Oct 2 20:20:39.704000 audit[1580]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:39.704000 audit[1580]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffeded76b80 a2=0 a3=7ffeded76b6c items=0 ppid=1549 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.704000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 20:20:39.736000 audit[1585]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1585 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:39.736000 audit[1585]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffedc119790 a2=0 a3=7ffedc11977c items=0 ppid=1549 pid=1585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.736000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 20:20:39.779000 audit[1590]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:39.779000 audit[1590]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc4707b3a0 a2=0 a3=7ffc4707b38c items=0 ppid=1549 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.779000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 20:20:39.780000 audit[1591]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:39.780000 audit[1591]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe2d34d200 a2=0 a3=7ffe2d34d1ec items=0 ppid=1549 pid=1591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.780000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 20:20:39.783823 kubelet[1549]: E1002 20:20:39.783811 1549 kubelet.go:2448] "Error getting node" err="node \"10.67.124.211\" not found" Oct 2 20:20:39.784119 kubelet[1549]: I1002 20:20:39.784111 1549 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.211" Oct 2 20:20:39.783000 audit[1594]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:39.783000 audit[1594]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc48581c40 a2=0 a3=7ffc48581c2c items=0 ppid=1549 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.783000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 20:20:39.786000 audit[1597]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1597 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:39.786000 audit[1597]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7fffca37b3b0 a2=0 a3=7fffca37b39c items=0 ppid=1549 pid=1597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.786000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 20:20:39.786000 audit[1598]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:39.786000 audit[1598]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe77cafe70 a2=0 a3=7ffe77cafe5c items=0 ppid=1549 pid=1598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.786000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 20:20:39.787000 audit[1599]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1599 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Oct 2 20:20:39.787000 audit[1599]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2c4a33f0 a2=0 a3=7fff2c4a33dc items=0 ppid=1549 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.787000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 20:20:39.788000 audit[1601]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:39.788000 audit[1601]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffccf75bd00 a2=0 a3=7ffccf75bcec items=0 ppid=1549 pid=1601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.788000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 20:20:39.790000 audit[1603]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1603 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:39.790000 audit[1603]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fffc2fbb0e0 a2=0 a3=7fffc2fbb0cc items=0 ppid=1549 pid=1603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.790000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 20:20:39.847000 audit[1606]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1606 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:39.847000 audit[1606]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffddecefc40 a2=0 a3=7ffddecefc2c items=0 ppid=1549 pid=1606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.847000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 20:20:39.849000 audit[1608]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1608 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:39.849000 audit[1608]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffc08f36c10 a2=0 a3=7ffc08f36bfc items=0 ppid=1549 pid=1608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.849000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 20:20:39.857000 audit[1611]: 
NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1611 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:39.857000 audit[1611]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7fff16201e90 a2=0 a3=7fff16201e7c items=0 ppid=1549 pid=1611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.857000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 20:20:39.857795 kubelet[1549]: I1002 20:20:39.857746 1549 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Oct 2 20:20:39.857000 audit[1612]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1612 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:39.857000 audit[1612]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc70dfd380 a2=0 a3=7ffc70dfd36c items=0 ppid=1549 pid=1612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.857000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 20:20:39.858000 audit[1613]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=1613 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:39.858000 audit[1613]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc30a875f0 a2=0 a3=7ffc30a875dc items=0 ppid=1549 pid=1613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.858000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 20:20:39.858000 audit[1614]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=1614 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:39.858000 audit[1614]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd6fa72990 a2=0 a3=7ffd6fa7297c items=0 ppid=1549 pid=1614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.858000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 20:20:39.858000 audit[1615]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=1615 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:39.858000 audit[1615]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3b39cdf0 a2=0 a3=7ffe3b39cddc items=0 ppid=1549 pid=1615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.858000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 20:20:39.859000 audit[1617]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1617 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:39.859000 audit[1617]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff4bb77ef0 a2=0 a3=7fff4bb77edc items=0 ppid=1549 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.859000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 20:20:39.860000 audit[1618]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1618 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:39.860000 audit[1618]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffbfdcc390 a2=0 a3=7fffbfdcc37c items=0 ppid=1549 pid=1618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.860000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 20:20:39.860000 audit[1619]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1619 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:39.860000 audit[1619]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7fffdf151830 a2=0 a3=7fffdf15181c items=0 ppid=1549 pid=1619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.860000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 20:20:39.862000 audit[1621]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1621 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:39.862000 audit[1621]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffed93f33a0 a2=0 a3=7ffed93f338c items=0 ppid=1549 pid=1621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.862000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 20:20:39.862000 audit[1622]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1622 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:39.862000 audit[1622]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdba6e14a0 a2=0 a3=7ffdba6e148c items=0 ppid=1549 pid=1622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 
20:20:39.862000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 20:20:39.863000 audit[1623]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1623 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:39.863000 audit[1623]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff39eb1460 a2=0 a3=7fff39eb144c items=0 ppid=1549 pid=1623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.863000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 20:20:39.864000 audit[1625]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=1625 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:39.864000 audit[1625]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe1e412430 a2=0 a3=7ffe1e41241c items=0 ppid=1549 pid=1625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.864000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 20:20:39.865000 audit[1627]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1627 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:39.865000 audit[1627]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe99e722d0 a2=0 a3=7ffe99e722bc items=0 ppid=1549 pid=1627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 20:20:39.867000 audit[1629]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1629 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:39.867000 audit[1629]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffe68de51c0 a2=0 a3=7ffe68de51ac items=0 ppid=1549 pid=1629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.867000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 20:20:39.868000 audit[1631]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1631 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:39.868000 audit[1631]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7fffe73a5860 a2=0 a3=7fffe73a584c items=0 ppid=1549 pid=1631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.868000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 20:20:39.870000 audit[1633]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1633 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:39.870000 audit[1633]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffdf54f77b0 a2=0 a3=7ffdf54f779c items=0 ppid=1549 pid=1633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.870000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 20:20:39.871619 kubelet[1549]: I1002 20:20:39.871555 1549 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 20:20:39.871619 kubelet[1549]: I1002 20:20:39.871569 1549 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 20:20:39.871619 kubelet[1549]: I1002 20:20:39.871583 1549 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 20:20:39.871619 kubelet[1549]: E1002 20:20:39.871618 1549 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 20:20:39.871000 audit[1634]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1634 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:39.871000 audit[1634]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc57593f40 a2=0 a3=7ffc57593f2c items=0 ppid=1549 pid=1634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.871000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 20:20:39.872000 audit[1635]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1635 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:39.872000 audit[1635]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda1a1a690 a2=0 a3=7ffda1a1a67c items=0 ppid=1549 pid=1635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:39.872000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 20:20:39.872000 audit[1636]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1636 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:39.872000 audit[1636]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0ee21f80 a2=0 a3=7ffc0ee21f6c items=0 ppid=1549 pid=1636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 
2 20:20:39.872000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 20:20:39.882232 kubelet[1549]: I1002 20:20:39.882154 1549 kubelet_node_status.go:73] "Successfully registered node" node="10.67.124.211" Oct 2 20:20:39.985385 kubelet[1549]: I1002 20:20:39.985149 1549 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 20:20:39.986203 env[1156]: time="2023-10-02T20:20:39.986074373Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 20:20:39.986906 kubelet[1549]: I1002 20:20:39.986651 1549 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 20:20:39.987476 kubelet[1549]: E1002 20:20:39.987421 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:20:40.675207 kubelet[1549]: I1002 20:20:40.675091 1549 apiserver.go:52] "Watching apiserver" Oct 2 20:20:40.675207 kubelet[1549]: E1002 20:20:40.675117 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:40.879428 kubelet[1549]: I1002 20:20:40.879408 1549 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:20:40.879542 kubelet[1549]: I1002 20:20:40.879461 1549 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:20:40.884034 systemd[1]: Created slice kubepods-burstable-pod88c1dbe0_b003_4872_b7fa_d3b14f1b6fde.slice. Oct 2 20:20:40.892670 kubelet[1549]: I1002 20:20:40.892622 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a00ee5fd-39b4-41c8-96f3-e385a41dd7fc-kube-proxy\") pod \"kube-proxy-z4bsd\" (UID: \"a00ee5fd-39b4-41c8-96f3-e385a41dd7fc\") " pod="kube-system/kube-proxy-z4bsd" Oct 2 20:20:40.892670 kubelet[1549]: I1002 20:20:40.892660 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-bpf-maps\") pod \"cilium-slp8v\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " pod="kube-system/cilium-slp8v" Oct 2 20:20:40.892786 kubelet[1549]: I1002 20:20:40.892689 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-hostproc\") pod \"cilium-slp8v\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " pod="kube-system/cilium-slp8v" Oct 2 20:20:40.892786 kubelet[1549]: I1002 20:20:40.892718 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-clustermesh-secrets\") pod \"cilium-slp8v\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " pod="kube-system/cilium-slp8v" Oct 2 20:20:40.892786 kubelet[1549]: I1002 20:20:40.892743 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-cilium-config-path\") pod \"cilium-slp8v\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " pod="kube-system/cilium-slp8v" Oct 2 20:20:40.892786 kubelet[1549]: I1002 
20:20:40.892767 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-hubble-tls\") pod \"cilium-slp8v\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " pod="kube-system/cilium-slp8v" Oct 2 20:20:40.892961 kubelet[1549]: I1002 20:20:40.892790 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-cilium-run\") pod \"cilium-slp8v\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " pod="kube-system/cilium-slp8v" Oct 2 20:20:40.892961 kubelet[1549]: I1002 20:20:40.892828 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-lib-modules\") pod \"cilium-slp8v\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " pod="kube-system/cilium-slp8v" Oct 2 20:20:40.892961 kubelet[1549]: I1002 20:20:40.892857 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-xtables-lock\") pod \"cilium-slp8v\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " pod="kube-system/cilium-slp8v" Oct 2 20:20:40.892961 kubelet[1549]: I1002 20:20:40.892882 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-host-proc-sys-net\") pod \"cilium-slp8v\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " pod="kube-system/cilium-slp8v" Oct 2 20:20:40.892961 kubelet[1549]: I1002 20:20:40.892907 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-host-proc-sys-kernel\") pod \"cilium-slp8v\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " pod="kube-system/cilium-slp8v" Oct 2 20:20:40.892961 kubelet[1549]: I1002 20:20:40.892931 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a00ee5fd-39b4-41c8-96f3-e385a41dd7fc-xtables-lock\") pod \"kube-proxy-z4bsd\" (UID: \"a00ee5fd-39b4-41c8-96f3-e385a41dd7fc\") " pod="kube-system/kube-proxy-z4bsd" Oct 2 20:20:40.893179 kubelet[1549]: I1002 20:20:40.892957 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a00ee5fd-39b4-41c8-96f3-e385a41dd7fc-lib-modules\") pod \"kube-proxy-z4bsd\" (UID: \"a00ee5fd-39b4-41c8-96f3-e385a41dd7fc\") " pod="kube-system/kube-proxy-z4bsd" Oct 2 20:20:40.893179 kubelet[1549]: I1002 20:20:40.892982 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m77gh\" (UniqueName: \"kubernetes.io/projected/a00ee5fd-39b4-41c8-96f3-e385a41dd7fc-kube-api-access-m77gh\") pod \"kube-proxy-z4bsd\" (UID: \"a00ee5fd-39b4-41c8-96f3-e385a41dd7fc\") " pod="kube-system/kube-proxy-z4bsd" Oct 2 20:20:40.893179 kubelet[1549]: I1002 20:20:40.893005 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-cni-path\") pod \"cilium-slp8v\" 
(UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " pod="kube-system/cilium-slp8v" Oct 2 20:20:40.893179 kubelet[1549]: I1002 20:20:40.893029 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-cilium-cgroup\") pod \"cilium-slp8v\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " pod="kube-system/cilium-slp8v" Oct 2 20:20:40.893179 kubelet[1549]: I1002 20:20:40.893103 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-etc-cni-netd\") pod \"cilium-slp8v\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " pod="kube-system/cilium-slp8v" Oct 2 20:20:40.893357 kubelet[1549]: I1002 20:20:40.893174 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkdqg\" (UniqueName: \"kubernetes.io/projected/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-kube-api-access-dkdqg\") pod \"cilium-slp8v\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " pod="kube-system/cilium-slp8v" Oct 2 20:20:40.893357 kubelet[1549]: I1002 20:20:40.893200 1549 reconciler.go:169] "Reconciler: start to sync state" Oct 2 20:20:40.909873 systemd[1]: Created slice kubepods-besteffort-poda00ee5fd_39b4_41c8_96f3_e385a41dd7fc.slice. Oct 2 20:20:41.676289 kubelet[1549]: E1002 20:20:41.676151 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:41.995187 kubelet[1549]: E1002 20:20:41.994972 1549 configmap.go:197] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Oct 2 20:20:41.995453 kubelet[1549]: E1002 20:20:41.995206 1549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a00ee5fd-39b4-41c8-96f3-e385a41dd7fc-kube-proxy podName:a00ee5fd-39b4-41c8-96f3-e385a41dd7fc nodeName:}" failed. No retries permitted until 2023-10-02 20:20:42.495150622 +0000 UTC m=+3.005943116 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/a00ee5fd-39b4-41c8-96f3-e385a41dd7fc-kube-proxy") pod "kube-proxy-z4bsd" (UID: "a00ee5fd-39b4-41c8-96f3-e385a41dd7fc") : failed to sync configmap cache: timed out waiting for the condition Oct 2 20:20:42.075453 kubelet[1549]: I1002 20:20:42.075331 1549 request.go:690] Waited for 1.19543535s due to client-side throttling, not priority and fairness, request: GET:https://145.40.82.213:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&limit=500&resourceVersion=0 Oct 2 20:20:42.677288 kubelet[1549]: E1002 20:20:42.677185 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:42.708393 env[1156]: time="2023-10-02T20:20:42.708263602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-slp8v,Uid:88c1dbe0-b003-4872-b7fa-d3b14f1b6fde,Namespace:kube-system,Attempt:0,}" Oct 2 20:20:42.732116 env[1156]: time="2023-10-02T20:20:42.731983227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z4bsd,Uid:a00ee5fd-39b4-41c8-96f3-e385a41dd7fc,Namespace:kube-system,Attempt:0,}" Oct 2 20:20:43.417193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2633838493.mount: Deactivated successfully. 
Oct 2 20:20:43.418753 env[1156]: time="2023-10-02T20:20:43.418706425Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:20:43.419787 env[1156]: time="2023-10-02T20:20:43.419754404Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:20:43.420324 env[1156]: time="2023-10-02T20:20:43.420283662Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:20:43.421079 env[1156]: time="2023-10-02T20:20:43.421039018Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:20:43.421444 env[1156]: time="2023-10-02T20:20:43.421396796Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:20:43.423271 env[1156]: time="2023-10-02T20:20:43.423230663Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:20:43.424594 env[1156]: time="2023-10-02T20:20:43.424554962Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:20:43.425265 env[1156]: time="2023-10-02T20:20:43.425226830Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:20:43.431676 env[1156]: time="2023-10-02T20:20:43.431615423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:20:43.431676 env[1156]: time="2023-10-02T20:20:43.431636144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:20:43.431676 env[1156]: time="2023-10-02T20:20:43.431642910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:20:43.431783 env[1156]: time="2023-10-02T20:20:43.431706647Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37 pid=1657 runtime=io.containerd.runc.v2 Oct 2 20:20:43.432285 env[1156]: time="2023-10-02T20:20:43.432260165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:20:43.432285 env[1156]: time="2023-10-02T20:20:43.432277484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:20:43.432342 env[1156]: time="2023-10-02T20:20:43.432286929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:20:43.432363 env[1156]: time="2023-10-02T20:20:43.432347489Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ab5f1f6e1ee2e09ebd4bf6ae5bcaa483f5d19e1cff3f4485966b02fc786bf7c pid=1665 runtime=io.containerd.runc.v2 Oct 2 20:20:43.437577 systemd[1]: Started cri-containerd-7ab5f1f6e1ee2e09ebd4bf6ae5bcaa483f5d19e1cff3f4485966b02fc786bf7c.scope. Oct 2 20:20:43.442000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.448656 kernel: kauditd_printk_skb: 472 callbacks suppressed Oct 2 20:20:43.448686 kernel: audit: type=1400 audit(1696278043.442:528): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.442000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.505388 systemd[1]: Started cri-containerd-13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37.scope. Oct 2 20:20:43.558695 kernel: audit: type=1400 audit(1696278043.442:529): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.558735 kernel: audit: type=1400 audit(1696278043.442:530): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.442000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.614614 kernel: audit: type=1400 audit(1696278043.442:531): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.442000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.672030 kernel: audit: type=1400 audit(1696278043.442:532): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.442000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.677936 kubelet[1549]: E1002 20:20:43.677893 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:43.731189 kernel: audit: type=1400 audit(1696278043.442:533): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.442000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 20:20:43.791929 kernel: audit: type=1400 audit(1696278043.442:534): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.791952 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 20:20:43.442000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.853959 kernel: audit: type=1400 audit(1696278043.442:535): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.853983 kernel: audit: type=1400 audit(1696278043.442:536): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.442000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.442000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.503000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.503000 audit: BPF prog-id=59 op=LOAD Oct 2 20:20:43.503000 audit[1680]: AVC avc: denied { bpf } for pid=1680 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.503000 audit[1680]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001c5c48 a2=10 a3=1c items=0 ppid=1665 pid=1680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:43.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761623566316636653165653265303965626434626636616535626361 Oct 2 20:20:43.503000 audit[1680]: AVC avc: denied { perfmon } for pid=1680 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.503000 audit[1680]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001c56b0 a2=3c a3=c items=0 ppid=1665 pid=1680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:43.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761623566316636653165653265303965626434626636616535626361 Oct 2 20:20:43.503000 audit[1680]: AVC avc: denied { bpf } for pid=1680 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.503000 audit[1680]: AVC avc: denied { bpf } for pid=1680 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.503000 audit[1680]: AVC avc: denied { bpf } for pid=1680 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.503000 audit[1680]: AVC avc: denied { perfmon } for pid=1680 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.503000 audit[1680]: AVC avc: denied { perfmon } for pid=1680 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.503000 audit[1680]: AVC avc: denied { perfmon } for pid=1680 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.503000 audit[1680]: AVC avc: denied { perfmon } for pid=1680 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.503000 audit[1680]: AVC avc: denied { perfmon } for pid=1680 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.503000 audit[1680]: AVC avc: denied { bpf } for pid=1680 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.510000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.510000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.510000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.510000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.510000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.510000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.510000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.510000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.510000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.503000 audit[1680]: AVC avc: denied { bpf } for pid=1680 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit: BPF prog-id=60 op=LOAD Oct 2 20:20:43.503000 audit: BPF prog-id=61 op=LOAD Oct 2 20:20:43.503000 audit[1680]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001c59d8 a2=78 a3=c00028f010 items=0 ppid=1665 pid=1680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:43.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761623566316636653165653265303965626434626636616535626361 Oct 2 20:20:43.671000 audit[1680]: AVC avc: denied { bpf } for pid=1680 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1680]: AVC avc: denied { bpf } for pid=1680 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1680]: AVC avc: denied { perfmon } for pid=1680 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1680]: AVC avc: denied { perfmon } for pid=1680 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1680]: AVC avc: denied { perfmon } for pid=1680 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1680]: AVC avc: denied { perfmon } for pid=1680 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1680]: AVC avc: denied { perfmon } for pid=1680 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1680]: AVC avc: denied { bpf } for pid=1680 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1678]: AVC avc: denied { bpf } for pid=1678 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1678]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1657 pid=1678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:43.671000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133653964626164363931393434656330393036316464326663366663 Oct 2 20:20:43.671000 audit[1678]: AVC avc: denied { perfmon } for pid=1678 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1678]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=1657 pid=1678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:43.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133653964626164363931393434656330393036316464326663366663 Oct 2 20:20:43.671000 audit[1678]: AVC avc: denied { bpf } for pid=1678 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1678]: AVC avc: denied { bpf } for pid=1678 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1678]: AVC avc: denied { bpf } for pid=1678 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1678]: AVC avc: denied { perfmon } for pid=1678 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1678]: AVC avc: denied { perfmon } for pid=1678 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1678]: AVC avc: denied { perfmon } for pid=1678 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1678]: AVC avc: denied { perfmon } for pid=1678 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1678]: AVC avc: denied { perfmon } for pid=1678 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1678]: AVC avc: denied { bpf } for pid=1678 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1680]: AVC avc: denied { bpf } for pid=1680 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit: BPF prog-id=62 op=LOAD Oct 2 20:20:43.671000 audit[1678]: AVC avc: denied { bpf } for pid=1678 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.671000 audit[1680]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=18 a0=5 a1=c0001c5770 a2=78 a3=c00028f058 items=0 ppid=1665 pid=1680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:43.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761623566316636653165653265303965626434626636616535626361 Oct 2 20:20:43.853000 audit: BPF prog-id=62 op=UNLOAD Oct 2 20:20:43.853000 audit: BPF prog-id=61 op=UNLOAD Oct 2 20:20:43.853000 audit[1680]: AVC avc: denied { bpf } for pid=1680 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.853000 audit[1680]: AVC avc: denied { bpf } for pid=1680 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.853000 audit[1680]: AVC avc: denied { bpf } for pid=1680 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.853000 audit[1680]: AVC avc: denied { perfmon } for pid=1680 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.853000 audit[1680]: AVC avc: denied { perfmon } for pid=1680 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.853000 audit[1680]: AVC avc: denied { perfmon } for pid=1680 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.853000 audit[1680]: AVC avc: denied { perfmon } for pid=1680 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.853000 audit[1680]: AVC avc: denied { perfmon } for pid=1680 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.853000 audit[1680]: AVC avc: denied { bpf } for pid=1680 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.853000 audit[1680]: AVC avc: denied { bpf } for pid=1680 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:43.853000 audit: BPF prog-id=64 op=LOAD Oct 2 20:20:43.853000 audit[1680]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001c5c30 a2=78 a3=c00028f468 items=0 ppid=1665 pid=1680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:43.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761623566316636653165653265303965626434626636616535626361 Oct 2 20:20:43.671000 audit[1678]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000210bc0 items=0 ppid=1657 pid=1678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:43.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133653964626164363931393434656330393036316464326663366663 Oct 2 20:20:44.004000 audit[1678]: AVC avc: denied { bpf } for pid=1678 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:44.004000 audit[1678]: AVC avc: denied { bpf } for pid=1678 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:44.004000 audit[1678]: AVC avc: denied { perfmon } for pid=1678 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:44.004000 audit[1678]: AVC avc: denied { perfmon } for pid=1678 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:44.004000 audit[1678]: AVC avc: denied { perfmon } for pid=1678 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:44.004000 audit[1678]: AVC avc: denied { perfmon } for pid=1678 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:44.004000 audit[1678]: AVC avc: denied { perfmon } for pid=1678 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:44.004000 audit[1678]: AVC avc: denied { bpf } for pid=1678 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:44.004000 audit[1678]: AVC avc: denied { bpf } for pid=1678 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:44.004000 audit: BPF prog-id=65 op=LOAD Oct 2 20:20:44.004000 audit[1678]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000210c08 items=0 ppid=1657 pid=1678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:44.004000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133653964626164363931393434656330393036316464326663366663 Oct 2 20:20:44.004000 audit: BPF prog-id=65 op=UNLOAD Oct 2 20:20:44.004000 audit: BPF prog-id=63 op=UNLOAD Oct 2 20:20:44.005000 audit[1678]: AVC avc: denied { bpf } for pid=1678 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
20:20:44.005000 audit[1678]: AVC avc: denied { bpf } for pid=1678 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:44.005000 audit[1678]: AVC avc: denied { bpf } for pid=1678 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:44.005000 audit[1678]: AVC avc: denied { perfmon } for pid=1678 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:44.005000 audit[1678]: AVC avc: denied { perfmon } for pid=1678 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:44.005000 audit[1678]: AVC avc: denied { perfmon } for pid=1678 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:44.005000 audit[1678]: AVC avc: denied { perfmon } for pid=1678 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:44.005000 audit[1678]: AVC avc: denied { perfmon } for pid=1678 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:44.005000 audit[1678]: AVC avc: denied { bpf } for pid=1678 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:44.005000 audit[1678]: AVC avc: denied { bpf } for pid=1678 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:44.005000 audit: BPF prog-id=66 op=LOAD Oct 2 20:20:44.005000 audit[1678]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000211018 items=0 ppid=1657 pid=1678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:44.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133653964626164363931393434656330393036316464326663366663 Oct 2 20:20:44.010415 env[1156]: time="2023-10-02T20:20:44.010353289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z4bsd,Uid:a00ee5fd-39b4-41c8-96f3-e385a41dd7fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ab5f1f6e1ee2e09ebd4bf6ae5bcaa483f5d19e1cff3f4485966b02fc786bf7c\"" Oct 2 20:20:44.011318 env[1156]: time="2023-10-02T20:20:44.011306302Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\"" Oct 2 20:20:44.022317 env[1156]: time="2023-10-02T20:20:44.022299910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-slp8v,Uid:88c1dbe0-b003-4872-b7fa-d3b14f1b6fde,Namespace:kube-system,Attempt:0,} returns sandbox id \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\"" Oct 2 20:20:44.678199 kubelet[1549]: E1002 20:20:44.678176 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:44.724361 
kubelet[1549]: E1002 20:20:44.724353 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:20:44.902047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3853753266.mount: Deactivated successfully. Oct 2 20:20:45.008000 audit[1301]: USER_END pid=1301 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:20:45.008000 audit[1301]: CRED_DISP pid=1301 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:20:45.008873 sudo[1301]: pam_unix(sudo:session): session closed for user root Oct 2 20:20:45.009616 sshd[1297]: pam_unix(sshd:session): session closed for user core Oct 2 20:20:45.009000 audit[1297]: USER_END pid=1297 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 20:20:45.009000 audit[1297]: CRED_DISP pid=1297 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 20:20:45.011217 systemd[1]: sshd@6-139.178.89.245:22-139.178.89.65:55540.service: Deactivated successfully. Oct 2 20:20:45.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.89.245:22-139.178.89.65:55540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:20:45.011644 systemd[1]: session-9.scope: Deactivated successfully. Oct 2 20:20:45.012077 systemd-logind[1146]: Session 9 logged out. Waiting for processes to exit. Oct 2 20:20:45.012458 systemd-logind[1146]: Removed session 9. 
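For reference, the recurring numeric fields in these audit records are ordinary kernel constants: arch=c000003e is AUDIT_ARCH_X86_64, syscall=46 is sendmsg (the netlink messages xtables-nft-multi sends to program nftables), syscall=321 is bpf (runc loading its cgroup device-filter programs), capability=38/39 are CAP_PERFMON/CAP_BPF, and auid/ses=4294967295 means the login UID and audit session were never set. A small, hypothetical lookup helper:

    AUDIT_ARCH = {0xC000003E: "x86_64"}
    SYSCALLS_X86_64 = {46: "sendmsg", 321: "bpf"}
    CAPABILITIES = {38: "CAP_PERFMON", 39: "CAP_BPF"}
    UNSET = 4294967295  # (u32)-1: login uid / audit session id was never set

    def describe(arch, syscall, capability=None):
        parts = [f"arch={AUDIT_ARCH.get(arch, hex(arch))}",
                 f"syscall={SYSCALLS_X86_64.get(syscall, syscall)}"]
        if capability is not None:
            parts.append(f"capability={CAPABILITIES.get(capability, capability)}")
        return " ".join(parts)

    print(describe(0xC000003E, 46))       # the iptables/ip6tables netlink calls
    print(describe(0xC000003E, 321, 39))  # runc's bpf() loads tripping CAP_BPF checks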
Oct 2 20:20:45.177088 env[1156]: time="2023-10-02T20:20:45.177031134Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:20:45.177623 env[1156]: time="2023-10-02T20:20:45.177583937Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:20:45.178145 env[1156]: time="2023-10-02T20:20:45.178107347Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:20:45.179831 env[1156]: time="2023-10-02T20:20:45.179812507Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:20:45.179985 env[1156]: time="2023-10-02T20:20:45.179973546Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference \"sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2\"" Oct 2 20:20:45.180406 env[1156]: time="2023-10-02T20:20:45.180362132Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\"" Oct 2 20:20:45.181289 env[1156]: time="2023-10-02T20:20:45.181274386Z" level=info msg="CreateContainer within sandbox \"7ab5f1f6e1ee2e09ebd4bf6ae5bcaa483f5d19e1cff3f4485966b02fc786bf7c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 20:20:45.187128 env[1156]: time="2023-10-02T20:20:45.187086912Z" level=info msg="CreateContainer within sandbox \"7ab5f1f6e1ee2e09ebd4bf6ae5bcaa483f5d19e1cff3f4485966b02fc786bf7c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"44bfc6a8798038547d94688bbe12382d286304a94c2e4b89e2fa5f2999b0fca8\"" Oct 2 20:20:45.187359 env[1156]: time="2023-10-02T20:20:45.187346434Z" level=info msg="StartContainer for \"44bfc6a8798038547d94688bbe12382d286304a94c2e4b89e2fa5f2999b0fca8\"" Oct 2 20:20:45.207637 systemd[1]: Started cri-containerd-44bfc6a8798038547d94688bbe12382d286304a94c2e4b89e2fa5f2999b0fca8.scope. 
Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { perfmon } for pid=1732 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1665 pid=1732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434626663366138373938303338353437643934363838626265313233 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { bpf } for pid=1732 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { bpf } for pid=1732 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { bpf } for pid=1732 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { perfmon } for pid=1732 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { perfmon } for pid=1732 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { perfmon } for pid=1732 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { perfmon } for pid=1732 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { perfmon } for pid=1732 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { bpf } for pid=1732 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { bpf } for pid=1732 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit: BPF prog-id=67 op=LOAD Oct 2 20:20:45.215000 audit[1732]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c00027e2f0 items=0 ppid=1665 pid=1732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.215000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434626663366138373938303338353437643934363838626265313233 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { bpf } for pid=1732 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { bpf } for pid=1732 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { perfmon } for pid=1732 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { perfmon } for pid=1732 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { perfmon } for pid=1732 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { perfmon } for pid=1732 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { perfmon } for pid=1732 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { bpf } for pid=1732 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { bpf } for pid=1732 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit: BPF prog-id=68 op=LOAD Oct 2 20:20:45.215000 audit[1732]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c00027e338 items=0 ppid=1665 pid=1732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434626663366138373938303338353437643934363838626265313233 Oct 2 20:20:45.215000 audit: BPF prog-id=68 op=UNLOAD Oct 2 20:20:45.215000 audit: BPF prog-id=67 op=UNLOAD Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { bpf } for pid=1732 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { bpf } for pid=1732 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { bpf } for pid=1732 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { perfmon } for pid=1732 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { perfmon } for pid=1732 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { perfmon } for pid=1732 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { perfmon } for pid=1732 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { perfmon } for pid=1732 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { bpf } for pid=1732 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit[1732]: AVC avc: denied { bpf } for pid=1732 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:20:45.215000 audit: BPF prog-id=69 op=LOAD Oct 2 20:20:45.215000 audit[1732]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c00027e3c8 items=0 ppid=1665 pid=1732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434626663366138373938303338353437643934363838626265313233 Oct 2 20:20:45.239938 env[1156]: time="2023-10-02T20:20:45.239913455Z" level=info msg="StartContainer for \"44bfc6a8798038547d94688bbe12382d286304a94c2e4b89e2fa5f2999b0fca8\" returns successfully" Oct 2 20:20:45.320805 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) Oct 2 20:20:45.320863 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes) Oct 2 20:20:45.320875 kernel: IPVS: ipvs loaded. Oct 2 20:20:45.373459 kernel: IPVS: [rr] scheduler registered. Oct 2 20:20:45.401408 kernel: IPVS: [wrr] scheduler registered. Oct 2 20:20:45.429470 kernel: IPVS: [sh] scheduler registered. 
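
The PROCTITLE fields in the runc audit records above carry the audited process's command line as hex-encoded, NUL-separated arguments. A minimal sketch that decodes one of those values back into readable form; the constant below is the leading portion of a PROCTITLE entry copied from the log (the audit subsystem truncates the full container ID):

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// First few fields of one PROCTITLE value from the runc entries above.
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// Arguments are NUL-separated; join them into a readable command line.
	fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
	// Prints: runc --root /run/containerd/runc/k8s.io --log
}

The same decoding applies to the iptables/ip6tables PROCTITLE entries that follow, which record kube-proxy creating its KUBE-* chains and rules.
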
Oct 2 20:20:45.642000 audit[1800]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1800 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:45.642000 audit[1800]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3133aa30 a2=0 a3=7ffc3133aa1c items=0 ppid=1742 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.642000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 20:20:45.643000 audit[1801]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=1801 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:45.643000 audit[1801]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe26041740 a2=0 a3=7ffe2604172c items=0 ppid=1742 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.643000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 20:20:45.645000 audit[1802]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1802 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:45.645000 audit[1802]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb55354f0 a2=0 a3=7ffcb55354dc items=0 ppid=1742 pid=1802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.645000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 20:20:45.646000 audit[1803]: NETFILTER_CFG table=nat:38 family=10 entries=1 op=nft_register_chain pid=1803 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:45.646000 audit[1803]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4a7a3580 a2=0 a3=7ffd4a7a356c items=0 ppid=1742 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.646000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 20:20:45.648000 audit[1804]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_chain pid=1804 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:45.648000 audit[1804]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdaa7889e0 a2=0 a3=7ffdaa7889cc items=0 ppid=1742 pid=1804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.648000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 20:20:45.649000 audit[1805]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=1805 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 
20:20:45.649000 audit[1805]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc6e9b7720 a2=0 a3=7ffc6e9b770c items=0 ppid=1742 pid=1805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.649000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 20:20:45.679501 kubelet[1549]: E1002 20:20:45.679368 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:45.756000 audit[1807]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1807 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:45.756000 audit[1807]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe03e31230 a2=0 a3=7ffe03e3121c items=0 ppid=1742 pid=1807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.756000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 20:20:45.763000 audit[1809]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1809 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:45.763000 audit[1809]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffc8283770 a2=0 a3=7fffc828375c items=0 ppid=1742 pid=1809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.763000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 20:20:45.773000 audit[1812]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=1812 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:45.773000 audit[1812]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcb833adf0 a2=0 a3=7ffcb833addc items=0 ppid=1742 pid=1812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.773000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 20:20:45.776000 audit[1813]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=1813 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:45.776000 audit[1813]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff77192530 a2=0 a3=7fff7719251c items=0 ppid=1742 pid=1813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 
20:20:45.776000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 20:20:45.782000 audit[1815]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=1815 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:45.782000 audit[1815]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdede7efe0 a2=0 a3=7ffdede7efcc items=0 ppid=1742 pid=1815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.782000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 20:20:45.785000 audit[1816]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=1816 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:45.785000 audit[1816]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0b894350 a2=0 a3=7ffc0b89433c items=0 ppid=1742 pid=1816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.785000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 20:20:45.792000 audit[1818]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1818 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:45.792000 audit[1818]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc8c8e3dd0 a2=0 a3=7ffc8c8e3dbc items=0 ppid=1742 pid=1818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.792000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 20:20:45.802000 audit[1821]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1821 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:45.802000 audit[1821]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd101f9ff0 a2=0 a3=7ffd101f9fdc items=0 ppid=1742 pid=1821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.802000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 20:20:45.805000 audit[1822]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1822 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:45.805000 audit[1822]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce45bd8d0 a2=0 
a3=7ffce45bd8bc items=0 ppid=1742 pid=1822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.805000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 20:20:45.812000 audit[1824]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1824 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:45.812000 audit[1824]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffca75c2570 a2=0 a3=7ffca75c255c items=0 ppid=1742 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.812000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 20:20:45.815000 audit[1825]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=1825 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:45.815000 audit[1825]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdaf8b18e0 a2=0 a3=7ffdaf8b18cc items=0 ppid=1742 pid=1825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.815000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 20:20:45.822000 audit[1827]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1827 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:45.822000 audit[1827]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe34ba3ed0 a2=0 a3=7ffe34ba3ebc items=0 ppid=1742 pid=1827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.822000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 20:20:45.832000 audit[1830]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1830 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:45.832000 audit[1830]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffbe254840 a2=0 a3=7fffbe25482c items=0 ppid=1742 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.832000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 20:20:45.842000 audit[1833]: NETFILTER_CFG 
table=filter:54 family=2 entries=1 op=nft_register_rule pid=1833 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:45.842000 audit[1833]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd50756820 a2=0 a3=7ffd5075680c items=0 ppid=1742 pid=1833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.842000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 20:20:45.845000 audit[1834]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1834 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:45.845000 audit[1834]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc557843f0 a2=0 a3=7ffc557843dc items=0 ppid=1742 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.845000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 20:20:45.851000 audit[1836]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=1836 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:45.851000 audit[1836]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd4084e600 a2=0 a3=7ffd4084e5ec items=0 ppid=1742 pid=1836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.851000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:20:45.860000 audit[1839]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1839 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:20:45.860000 audit[1839]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd825b9120 a2=0 a3=7ffd825b910c items=0 ppid=1742 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.860000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:20:45.894000 audit[1843]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=1843 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 20:20:45.894000 audit[1843]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffe8c604400 a2=0 a3=7ffe8c6043ec items=0 ppid=1742 pid=1843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.894000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:20:45.919000 audit[1843]: NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=1843 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 20:20:45.919000 audit[1843]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffe8c604400 a2=0 a3=7ffe8c6043ec items=0 ppid=1742 pid=1843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.919000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:20:45.923000 audit[1847]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1847 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:45.923000 audit[1847]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd73b30380 a2=0 a3=7ffd73b3036c items=0 ppid=1742 pid=1847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.923000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 20:20:45.930000 audit[1849]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=1849 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:45.930000 audit[1849]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff8e1914b0 a2=0 a3=7fff8e19149c items=0 ppid=1742 pid=1849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.930000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 20:20:45.939000 audit[1852]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=1852 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:45.939000 audit[1852]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe9a106270 a2=0 a3=7ffe9a10625c items=0 ppid=1742 pid=1852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.939000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 20:20:45.942000 audit[1853]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=1853 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:45.942000 audit[1853]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc671b41f0 a2=0 a3=7ffc671b41dc items=0 ppid=1742 pid=1853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.942000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 20:20:45.949000 audit[1855]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=1855 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:45.949000 audit[1855]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc12728d0 a2=0 a3=7fffc12728bc items=0 ppid=1742 pid=1855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.949000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 20:20:45.952000 audit[1856]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1856 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:45.952000 audit[1856]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe954e7e80 a2=0 a3=7ffe954e7e6c items=0 ppid=1742 pid=1856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.952000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 20:20:45.959000 audit[1858]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1858 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:45.959000 audit[1858]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffde54c3490 a2=0 a3=7ffde54c347c items=0 ppid=1742 pid=1858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.959000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 20:20:45.969000 audit[1861]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=1861 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:45.969000 audit[1861]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe2ba861d0 a2=0 a3=7ffe2ba861bc items=0 ppid=1742 pid=1861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.969000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 20:20:45.972000 audit[1862]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=1862 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:45.972000 audit[1862]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc5c2b7f0 a2=0 a3=7ffcc5c2b7dc items=0 ppid=1742 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.972000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 20:20:45.978000 audit[1864]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=1864 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:45.978000 audit[1864]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe0f17fd60 a2=0 a3=7ffe0f17fd4c items=0 ppid=1742 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.978000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 20:20:45.981000 audit[1865]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1865 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:45.981000 audit[1865]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffe51f9ab0 a2=0 a3=7fffe51f9a9c items=0 ppid=1742 pid=1865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.981000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 20:20:45.988000 audit[1867]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1867 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:45.988000 audit[1867]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffe8a3d850 a2=0 a3=7fffe8a3d83c items=0 ppid=1742 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.988000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 20:20:45.998000 audit[1870]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=1870 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:45.998000 audit[1870]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcf4609c50 a2=0 a3=7ffcf4609c3c items=0 ppid=1742 pid=1870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:45.998000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 20:20:46.008000 audit[1873]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=1873 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:46.008000 audit[1873]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff2f35d3c0 a2=0 a3=7fff2f35d3ac items=0 ppid=1742 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:46.008000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 20:20:46.011000 audit[1874]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=1874 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:46.011000 audit[1874]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc919e4d50 a2=0 a3=7ffc919e4d3c items=0 ppid=1742 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:46.011000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 20:20:46.017000 audit[1876]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=1876 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:46.017000 audit[1876]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffece77b1f0 a2=0 a3=7ffece77b1dc items=0 ppid=1742 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:46.017000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:20:46.026000 audit[1879]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=1879 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:20:46.026000 audit[1879]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffecc80a980 a2=0 a3=7ffecc80a96c items=0 ppid=1742 pid=1879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:46.026000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:20:46.041000 audit[1883]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=1883 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 20:20:46.041000 audit[1883]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=1916 a0=3 a1=7ffd46439580 a2=0 a3=7ffd4643956c items=0 ppid=1742 pid=1883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:46.041000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:20:46.042000 audit[1883]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=1883 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 20:20:46.042000 audit[1883]: SYSCALL arch=c000003e syscall=46 success=yes exit=1860 a0=3 a1=7ffd46439580 a2=0 a3=7ffd4643956c items=0 ppid=1742 pid=1883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:20:46.042000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:20:46.679916 kubelet[1549]: E1002 20:20:46.679873 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:47.680854 kubelet[1549]: E1002 20:20:47.680809 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:48.593160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3640963543.mount: Deactivated successfully. Oct 2 20:20:48.681959 kubelet[1549]: E1002 20:20:48.681912 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:49.635947 kubelet[1549]: I1002 20:20:49.635895 1549 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 20:20:49.682662 kubelet[1549]: E1002 20:20:49.682620 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:49.725062 kubelet[1549]: E1002 20:20:49.725044 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:20:50.242679 env[1156]: time="2023-10-02T20:20:50.242638148Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:20:50.243234 env[1156]: time="2023-10-02T20:20:50.243206424Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:20:50.244125 env[1156]: time="2023-10-02T20:20:50.244078388Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:20:50.244896 env[1156]: time="2023-10-02T20:20:50.244855257Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\" returns image reference 
\"sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88\"" Oct 2 20:20:50.245924 env[1156]: time="2023-10-02T20:20:50.245879152Z" level=info msg="CreateContainer within sandbox \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 20:20:50.250305 env[1156]: time="2023-10-02T20:20:50.250289543Z" level=info msg="CreateContainer within sandbox \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1\"" Oct 2 20:20:50.250624 env[1156]: time="2023-10-02T20:20:50.250612865Z" level=info msg="StartContainer for \"0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1\"" Oct 2 20:20:50.272454 systemd[1]: Started cri-containerd-0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1.scope. Oct 2 20:20:50.277260 systemd[1]: cri-containerd-0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1.scope: Deactivated successfully. Oct 2 20:20:50.277425 systemd[1]: Stopped cri-containerd-0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1.scope. Oct 2 20:20:50.279705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1-rootfs.mount: Deactivated successfully. Oct 2 20:20:50.683677 kubelet[1549]: E1002 20:20:50.683575 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:51.538083 env[1156]: time="2023-10-02T20:20:51.537932018Z" level=info msg="shim disconnected" id=0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1 Oct 2 20:20:51.538083 env[1156]: time="2023-10-02T20:20:51.538042599Z" level=warning msg="cleaning up after shim disconnected" id=0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1 namespace=k8s.io Oct 2 20:20:51.538083 env[1156]: time="2023-10-02T20:20:51.538069243Z" level=info msg="cleaning up dead shim" Oct 2 20:20:51.562301 env[1156]: time="2023-10-02T20:20:51.562282970Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:20:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1908 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:20:51Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:20:51.562481 env[1156]: time="2023-10-02T20:20:51.562429216Z" level=error msg="copy shim log" error="read /proc/self/fd/57: file already closed" Oct 2 20:20:51.562569 env[1156]: time="2023-10-02T20:20:51.562541885Z" level=error msg="Failed to pipe stdout of container \"0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1\"" error="reading from a closed fifo" Oct 2 20:20:51.562631 env[1156]: time="2023-10-02T20:20:51.562611167Z" level=error msg="Failed to pipe stderr of container \"0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1\"" error="reading from a closed fifo" Oct 2 20:20:51.571269 env[1156]: time="2023-10-02T20:20:51.571216007Z" level=error msg="StartContainer for \"0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start 
container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:20:51.571395 kubelet[1549]: E1002 20:20:51.571383 1549 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1" Oct 2 20:20:51.571498 kubelet[1549]: E1002 20:20:51.571464 1549 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:20:51.571498 kubelet[1549]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:20:51.571498 kubelet[1549]: rm /hostbin/cilium-mount Oct 2 20:20:51.571498 kubelet[1549]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dkdqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:20:51.571658 kubelet[1549]: E1002 20:20:51.571493 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:20:51.684231 kubelet[1549]: E1002 20:20:51.684096 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:51.905820 env[1156]: time="2023-10-02T20:20:51.905693043Z" level=info msg="CreateContainer within sandbox 
\"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 20:20:51.920931 env[1156]: time="2023-10-02T20:20:51.920843847Z" level=info msg="CreateContainer within sandbox \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321\"" Oct 2 20:20:51.921866 env[1156]: time="2023-10-02T20:20:51.921780521Z" level=info msg="StartContainer for \"3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321\"" Oct 2 20:20:51.941727 systemd[1]: Started cri-containerd-3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321.scope. Oct 2 20:20:51.946413 systemd[1]: cri-containerd-3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321.scope: Deactivated successfully. Oct 2 20:20:51.946595 systemd[1]: Stopped cri-containerd-3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321.scope. Oct 2 20:20:51.948711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321-rootfs.mount: Deactivated successfully. Oct 2 20:20:51.951022 env[1156]: time="2023-10-02T20:20:51.950959958Z" level=info msg="shim disconnected" id=3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321 Oct 2 20:20:51.951022 env[1156]: time="2023-10-02T20:20:51.950994855Z" level=warning msg="cleaning up after shim disconnected" id=3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321 namespace=k8s.io Oct 2 20:20:51.951022 env[1156]: time="2023-10-02T20:20:51.951004535Z" level=info msg="cleaning up dead shim" Oct 2 20:20:51.967463 env[1156]: time="2023-10-02T20:20:51.967412667Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:20:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1944 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:20:51Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:20:51.967670 env[1156]: time="2023-10-02T20:20:51.967601734Z" level=error msg="copy shim log" error="read /proc/self/fd/53: file already closed" Oct 2 20:20:51.967835 env[1156]: time="2023-10-02T20:20:51.967765814Z" level=error msg="Failed to pipe stdout of container \"3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321\"" error="reading from a closed fifo" Oct 2 20:20:51.967835 env[1156]: time="2023-10-02T20:20:51.967767704Z" level=error msg="Failed to pipe stderr of container \"3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321\"" error="reading from a closed fifo" Oct 2 20:20:51.968512 env[1156]: time="2023-10-02T20:20:51.968452112Z" level=error msg="StartContainer for \"3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:20:51.968655 kubelet[1549]: E1002 20:20:51.968614 1549 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create 
failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321" Oct 2 20:20:51.968730 kubelet[1549]: E1002 20:20:51.968694 1549 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:20:51.968730 kubelet[1549]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:20:51.968730 kubelet[1549]: rm /hostbin/cilium-mount Oct 2 20:20:51.968730 kubelet[1549]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dkdqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:20:51.968914 kubelet[1549]: E1002 20:20:51.968724 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:20:52.562047 systemd-timesyncd[1103]: Contacted time server [2607:f298:5:101d:f816:3eff:fefd:8817]:123 (2.flatcar.pool.ntp.org). Oct 2 20:20:52.562187 systemd-timesyncd[1103]: Initial clock synchronization to Mon 2023-10-02 20:20:52.748860 UTC. 
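
Both StartContainer attempts for the cilium mount-cgroup init container end in "write /proc/self/attr/keycreate: invalid argument": runc writes an SELinux keyring-creation label during container init, and this host's kernel rejects the write. A minimal sketch that reproduces only that write, for illustration; the label string is an assumption composed from the SELinuxOptions in the logged container spec (Type:spc_t, Level:s0), not necessarily the exact value runc builds, and the program must run as root on a comparable host:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Assumed label, derived from the SELinuxOptions in the container spec above.
	label := "system_u:system_r:spc_t:s0"

	// runc performs an equivalent write while setting up container init; on
	// this host it fails, surfacing as the
	// "write /proc/self/attr/keycreate: invalid argument" errors in the log.
	f, err := os.OpenFile("/proc/self/attr/keycreate", os.O_WRONLY, 0)
	if err != nil {
		fmt.Println("open failed:", err)
		return
	}
	defer f.Close()

	if _, err := f.Write([]byte(label)); err != nil {
		fmt.Println("write failed:", err) // expect "invalid argument" here
		return
	}
	fmt.Println("keycreate label set to", label)
}
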
Oct 2 20:20:52.684533 kubelet[1549]: E1002 20:20:52.684423 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:52.907141 kubelet[1549]: I1002 20:20:52.907043 1549 scope.go:115] "RemoveContainer" containerID="0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1" Oct 2 20:20:52.907879 kubelet[1549]: I1002 20:20:52.907795 1549 scope.go:115] "RemoveContainer" containerID="0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1" Oct 2 20:20:52.909946 env[1156]: time="2023-10-02T20:20:52.909836219Z" level=info msg="RemoveContainer for \"0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1\"" Oct 2 20:20:52.910655 env[1156]: time="2023-10-02T20:20:52.910562477Z" level=info msg="RemoveContainer for \"0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1\"" Oct 2 20:20:52.910851 env[1156]: time="2023-10-02T20:20:52.910759843Z" level=error msg="RemoveContainer for \"0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1\" failed" error="failed to set removing state for container \"0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1\": container is already in removing state" Oct 2 20:20:52.911211 kubelet[1549]: E1002 20:20:52.911135 1549 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1\": container is already in removing state" containerID="0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1" Oct 2 20:20:52.911458 kubelet[1549]: E1002 20:20:52.911231 1549 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1": container is already in removing state; Skipping pod "cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)" Oct 2 20:20:52.911957 kubelet[1549]: E1002 20:20:52.911873 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:20:52.913555 env[1156]: time="2023-10-02T20:20:52.913443144Z" level=info msg="RemoveContainer for \"0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1\" returns successfully" Oct 2 20:20:53.685791 kubelet[1549]: E1002 20:20:53.685681 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:53.913784 kubelet[1549]: E1002 20:20:53.913683 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:20:54.645774 kubelet[1549]: W1002 20:20:54.645604 1549 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88c1dbe0_b003_4872_b7fa_d3b14f1b6fde.slice/cri-containerd-0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1.scope WatchSource:0}: container 
"0e578234d569f94889fc68bceaede5968dfc08b2eec1711eb365485210cbb7a1" in namespace "k8s.io": not found Oct 2 20:20:54.686937 kubelet[1549]: E1002 20:20:54.686819 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:54.726545 kubelet[1549]: E1002 20:20:54.726443 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:20:55.687587 kubelet[1549]: E1002 20:20:55.687474 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:56.688814 kubelet[1549]: E1002 20:20:56.688707 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:57.689029 kubelet[1549]: E1002 20:20:57.688917 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:57.756803 kubelet[1549]: W1002 20:20:57.756716 1549 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88c1dbe0_b003_4872_b7fa_d3b14f1b6fde.slice/cri-containerd-3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321.scope WatchSource:0}: task 3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321 not found: not found Oct 2 20:20:58.689864 kubelet[1549]: E1002 20:20:58.689782 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:59.674931 kubelet[1549]: E1002 20:20:59.674853 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:59.690741 kubelet[1549]: E1002 20:20:59.690665 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:20:59.727573 kubelet[1549]: E1002 20:20:59.727508 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:21:00.691372 kubelet[1549]: E1002 20:21:00.691297 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:01.692058 kubelet[1549]: E1002 20:21:01.691988 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:02.692785 kubelet[1549]: E1002 20:21:02.692708 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:03.693684 kubelet[1549]: E1002 20:21:03.693576 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:04.694249 kubelet[1549]: E1002 20:21:04.694136 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:04.729581 kubelet[1549]: E1002 20:21:04.729478 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:21:05.695357 kubelet[1549]: E1002 20:21:05.695250 1549 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:05.877264 env[1156]: time="2023-10-02T20:21:05.877163351Z" level=info msg="CreateContainer within sandbox \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 20:21:05.890476 env[1156]: time="2023-10-02T20:21:05.890372721Z" level=info msg="CreateContainer within sandbox \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8\"" Oct 2 20:21:05.890768 env[1156]: time="2023-10-02T20:21:05.890753082Z" level=info msg="StartContainer for \"ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8\"" Oct 2 20:21:05.913675 systemd[1]: Started cri-containerd-ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8.scope. Oct 2 20:21:05.918571 systemd[1]: cri-containerd-ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8.scope: Deactivated successfully. Oct 2 20:21:05.918793 systemd[1]: Stopped cri-containerd-ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8.scope. Oct 2 20:21:05.926526 env[1156]: time="2023-10-02T20:21:05.926461684Z" level=info msg="shim disconnected" id=ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8 Oct 2 20:21:05.926526 env[1156]: time="2023-10-02T20:21:05.926502094Z" level=warning msg="cleaning up after shim disconnected" id=ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8 namespace=k8s.io Oct 2 20:21:05.926526 env[1156]: time="2023-10-02T20:21:05.926510750Z" level=info msg="cleaning up dead shim" Oct 2 20:21:05.945247 env[1156]: time="2023-10-02T20:21:05.945158411Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:21:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1979 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:21:05Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:21:05.945677 env[1156]: time="2023-10-02T20:21:05.945524689Z" level=error msg="copy shim log" error="read /proc/self/fd/53: file already closed" Oct 2 20:21:05.945925 env[1156]: time="2023-10-02T20:21:05.945858136Z" level=error msg="Failed to pipe stderr of container \"ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8\"" error="reading from a closed fifo" Oct 2 20:21:05.946395 env[1156]: time="2023-10-02T20:21:05.946335878Z" level=error msg="Failed to pipe stdout of container \"ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8\"" error="reading from a closed fifo" Oct 2 20:21:05.947319 env[1156]: time="2023-10-02T20:21:05.947230593Z" level=error msg="StartContainer for \"ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:21:05.947530 kubelet[1549]: E1002 20:21:05.947473 1549 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start 
container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8" Oct 2 20:21:05.947687 kubelet[1549]: E1002 20:21:05.947628 1549 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:21:05.947687 kubelet[1549]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:21:05.947687 kubelet[1549]: rm /hostbin/cilium-mount Oct 2 20:21:05.947687 kubelet[1549]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dkdqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:21:05.948000 kubelet[1549]: E1002 20:21:05.947705 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:21:05.983290 update_engine[1148]: I1002 20:21:05.983176 1148 update_attempter.cc:505] Updating boot flags... Oct 2 20:21:06.696580 kubelet[1549]: E1002 20:21:06.696507 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:06.889450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8-rootfs.mount: Deactivated successfully. 
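Every attempt dies at the same point: during container init, runc tries to write an SELinux keyring label to /proc/self/attr/keycreate and the kernel rejects the write with "invalid argument". The SELinuxOptions block in the spec above (Type:spc_t) is presumably what makes the runtime attempt that labeling at all, so the mismatch is most likely between the requested label and the node's SELinux state. A minimal probe from a shell on the node, using ordinary SELinux and procfs interfaces rather than anything taken from this log, would be something like:

    # Is SELinux present, and in which mode? getenforce ships with
    # libselinux and may not exist on a minimal host, hence the fallback.
    getenforce 2>/dev/null || cat /sys/fs/selinux/enforce 2>/dev/null

    # Per-process LSM attributes: keycreate is the file the failing
    # write targets, current is the label of this shell (if any).
    cat /proc/self/attr/current
    cat /proc/self/attr/keycreate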
Oct 2 20:21:06.946159 kubelet[1549]: I1002 20:21:06.946090 1549 scope.go:115] "RemoveContainer" containerID="3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321" Oct 2 20:21:06.947019 kubelet[1549]: I1002 20:21:06.946890 1549 scope.go:115] "RemoveContainer" containerID="3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321" Oct 2 20:21:06.948773 env[1156]: time="2023-10-02T20:21:06.948702860Z" level=info msg="RemoveContainer for \"3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321\"" Oct 2 20:21:06.949670 env[1156]: time="2023-10-02T20:21:06.949590148Z" level=info msg="RemoveContainer for \"3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321\"" Oct 2 20:21:06.949912 env[1156]: time="2023-10-02T20:21:06.949833809Z" level=error msg="RemoveContainer for \"3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321\" failed" error="failed to set removing state for container \"3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321\": container is already in removing state" Oct 2 20:21:06.950203 kubelet[1549]: E1002 20:21:06.950163 1549 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321\": container is already in removing state" containerID="3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321" Oct 2 20:21:06.950384 kubelet[1549]: E1002 20:21:06.950242 1549 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321": container is already in removing state; Skipping pod "cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)" Oct 2 20:21:06.950928 kubelet[1549]: E1002 20:21:06.950893 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:21:06.952159 env[1156]: time="2023-10-02T20:21:06.952089082Z" level=info msg="RemoveContainer for \"3faf94912f0ae03d9dfa20a28faf6c604a2f1445d63b010a0efbd4f8a01b0321\" returns successfully" Oct 2 20:21:07.696861 kubelet[1549]: E1002 20:21:07.696752 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:08.697707 kubelet[1549]: E1002 20:21:08.697599 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:09.033636 kubelet[1549]: W1002 20:21:09.033445 1549 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88c1dbe0_b003_4872_b7fa_d3b14f1b6fde.slice/cri-containerd-ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8.scope WatchSource:0}: task ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8 not found: not found Oct 2 20:21:09.698786 kubelet[1549]: E1002 20:21:09.698678 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:09.730489 kubelet[1549]: E1002 20:21:09.730427 1549 kubelet.go:2373] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:21:10.699028 kubelet[1549]: E1002 20:21:10.698917 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:11.700176 kubelet[1549]: E1002 20:21:11.700063 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:12.700615 kubelet[1549]: E1002 20:21:12.700503 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:13.701743 kubelet[1549]: E1002 20:21:13.701626 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:14.702881 kubelet[1549]: E1002 20:21:14.702771 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:14.731642 kubelet[1549]: E1002 20:21:14.731580 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:21:15.703333 kubelet[1549]: E1002 20:21:15.703224 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:16.704273 kubelet[1549]: E1002 20:21:16.704158 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:17.705450 kubelet[1549]: E1002 20:21:17.705337 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:18.705972 kubelet[1549]: E1002 20:21:18.705871 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:19.675278 kubelet[1549]: E1002 20:21:19.675168 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:19.706968 kubelet[1549]: E1002 20:21:19.706859 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:19.732549 kubelet[1549]: E1002 20:21:19.732488 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:21:20.707435 kubelet[1549]: E1002 20:21:20.707310 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:20.873811 kubelet[1549]: E1002 20:21:20.873717 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:21:21.708555 kubelet[1549]: E1002 20:21:21.708447 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:22.709338 kubelet[1549]: E1002 20:21:22.709233 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:23.710003 
kubelet[1549]: E1002 20:21:23.709889 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:24.711245 kubelet[1549]: E1002 20:21:24.711132 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:24.733742 kubelet[1549]: E1002 20:21:24.733682 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:21:25.711458 kubelet[1549]: E1002 20:21:25.711352 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:26.711861 kubelet[1549]: E1002 20:21:26.711789 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:27.712800 kubelet[1549]: E1002 20:21:27.712694 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:28.713267 kubelet[1549]: E1002 20:21:28.713161 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:29.714004 kubelet[1549]: E1002 20:21:29.713891 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:29.734675 kubelet[1549]: E1002 20:21:29.734615 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:21:30.714889 kubelet[1549]: E1002 20:21:30.714780 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:31.716062 kubelet[1549]: E1002 20:21:31.715960 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:32.716309 kubelet[1549]: E1002 20:21:32.716195 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:33.717453 kubelet[1549]: E1002 20:21:33.717345 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:33.888586 env[1156]: time="2023-10-02T20:21:33.888484312Z" level=info msg="CreateContainer within sandbox \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 20:21:33.901775 env[1156]: time="2023-10-02T20:21:33.901648999Z" level=info msg="CreateContainer within sandbox \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539\"" Oct 2 20:21:33.902509 env[1156]: time="2023-10-02T20:21:33.902471642Z" level=info msg="StartContainer for \"0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539\"" Oct 2 20:21:33.923504 systemd[1]: Started cri-containerd-0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539.scope. Oct 2 20:21:33.928850 systemd[1]: cri-containerd-0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539.scope: Deactivated successfully. 
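The restart cadence around these attempts follows the kubelet's CrashLoopBackOff: the wait doubles after every failed start (back-off 10s and 20s appear above, 40s and 1m20s further down in the log). As a rough sketch of that progression, with the caveat that the 10-second starting delay and 300-second ceiling are assumptions about kubelet defaults rather than values printed in this log:

    # Doubling restart back-off with a cap; only 10s/20s/40s/1m20s are
    # actually visible in the surrounding log lines.
    delay=10
    cap=300
    for attempt in 1 2 3 4 5 6 7; do
        echo "attempt ${attempt}: back-off ${delay}s"
        delay=$((delay * 2))
        if [ "$delay" -gt "$cap" ]; then delay=$cap; fi
    done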
Oct 2 20:21:33.929040 systemd[1]: Stopped cri-containerd-0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539.scope. Oct 2 20:21:33.931258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539-rootfs.mount: Deactivated successfully. Oct 2 20:21:33.933697 env[1156]: time="2023-10-02T20:21:33.933626623Z" level=info msg="shim disconnected" id=0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539 Oct 2 20:21:33.933697 env[1156]: time="2023-10-02T20:21:33.933664298Z" level=warning msg="cleaning up after shim disconnected" id=0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539 namespace=k8s.io Oct 2 20:21:33.933697 env[1156]: time="2023-10-02T20:21:33.933671885Z" level=info msg="cleaning up dead shim" Oct 2 20:21:33.950663 env[1156]: time="2023-10-02T20:21:33.950600704Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:21:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2036 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:21:33Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:21:33.950900 env[1156]: time="2023-10-02T20:21:33.950825997Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 20:21:33.951058 env[1156]: time="2023-10-02T20:21:33.951013382Z" level=error msg="Failed to pipe stdout of container \"0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539\"" error="reading from a closed fifo" Oct 2 20:21:33.951058 env[1156]: time="2023-10-02T20:21:33.951017008Z" level=error msg="Failed to pipe stderr of container \"0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539\"" error="reading from a closed fifo" Oct 2 20:21:33.951869 env[1156]: time="2023-10-02T20:21:33.951802689Z" level=error msg="StartContainer for \"0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:21:33.952061 kubelet[1549]: E1002 20:21:33.952009 1549 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539" Oct 2 20:21:33.952171 kubelet[1549]: E1002 20:21:33.952126 1549 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:21:33.952171 kubelet[1549]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:21:33.952171 kubelet[1549]: rm /hostbin/cilium-mount Oct 2 20:21:33.952171 kubelet[1549]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dkdqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:21:33.952370 kubelet[1549]: E1002 20:21:33.952172 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:21:34.015082 kubelet[1549]: I1002 20:21:34.014904 1549 scope.go:115] "RemoveContainer" containerID="ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8" Oct 2 20:21:34.015676 kubelet[1549]: I1002 20:21:34.015629 1549 scope.go:115] "RemoveContainer" containerID="ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8" Oct 2 20:21:34.017304 env[1156]: time="2023-10-02T20:21:34.017191616Z" level=info msg="RemoveContainer for \"ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8\"" Oct 2 20:21:34.018545 env[1156]: time="2023-10-02T20:21:34.018436209Z" level=info msg="RemoveContainer for \"ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8\"" Oct 2 20:21:34.018754 env[1156]: time="2023-10-02T20:21:34.018650304Z" level=error msg="RemoveContainer for \"ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8\" failed" error="failed to set removing state for container \"ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8\": container is already in removing state" Oct 2 20:21:34.019041 kubelet[1549]: E1002 20:21:34.018970 1549 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8\": container is already in removing state" 
containerID="ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8" Oct 2 20:21:34.019041 kubelet[1549]: E1002 20:21:34.019038 1549 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8": container is already in removing state; Skipping pod "cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)" Oct 2 20:21:34.019805 kubelet[1549]: E1002 20:21:34.019734 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:21:34.020622 env[1156]: time="2023-10-02T20:21:34.020510585Z" level=info msg="RemoveContainer for \"ce9ea4427354952a7289448239af3476cc624a5d350cd219546b4dd629026db8\" returns successfully" Oct 2 20:21:34.717703 kubelet[1549]: E1002 20:21:34.717585 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:34.736284 kubelet[1549]: E1002 20:21:34.736175 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:21:35.718727 kubelet[1549]: E1002 20:21:35.718621 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:36.719108 kubelet[1549]: E1002 20:21:36.719027 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:37.042804 kubelet[1549]: W1002 20:21:37.042566 1549 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88c1dbe0_b003_4872_b7fa_d3b14f1b6fde.slice/cri-containerd-0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539.scope WatchSource:0}: task 0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539 not found: not found Oct 2 20:21:37.720204 kubelet[1549]: E1002 20:21:37.720124 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:38.721213 kubelet[1549]: E1002 20:21:38.721130 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:39.674927 kubelet[1549]: E1002 20:21:39.674853 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:39.721705 kubelet[1549]: E1002 20:21:39.721670 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:39.736652 kubelet[1549]: E1002 20:21:39.736618 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:21:40.722326 kubelet[1549]: E1002 20:21:40.722215 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:41.722793 kubelet[1549]: E1002 20:21:41.722688 1549 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:42.723331 kubelet[1549]: E1002 20:21:42.723219 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:43.724563 kubelet[1549]: E1002 20:21:43.724459 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:44.724840 kubelet[1549]: E1002 20:21:44.724766 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:44.738280 kubelet[1549]: E1002 20:21:44.738209 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:21:44.873807 kubelet[1549]: E1002 20:21:44.873705 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:21:45.725991 kubelet[1549]: E1002 20:21:45.725860 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:46.726286 kubelet[1549]: E1002 20:21:46.726179 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:47.727473 kubelet[1549]: E1002 20:21:47.727368 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:48.727730 kubelet[1549]: E1002 20:21:48.727573 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:49.728318 kubelet[1549]: E1002 20:21:49.728209 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:49.739383 kubelet[1549]: E1002 20:21:49.739290 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:21:50.728761 kubelet[1549]: E1002 20:21:50.728622 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:51.729648 kubelet[1549]: E1002 20:21:51.729546 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:52.730621 kubelet[1549]: E1002 20:21:52.730522 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:53.731350 kubelet[1549]: E1002 20:21:53.731249 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:54.732041 kubelet[1549]: E1002 20:21:54.731939 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:54.741456 kubelet[1549]: E1002 20:21:54.741348 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 
20:21:55.732206 kubelet[1549]: E1002 20:21:55.732100 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:55.873295 kubelet[1549]: E1002 20:21:55.873231 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:21:56.732789 kubelet[1549]: E1002 20:21:56.732677 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:57.733035 kubelet[1549]: E1002 20:21:57.732936 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:58.733675 kubelet[1549]: E1002 20:21:58.733577 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:59.675215 kubelet[1549]: E1002 20:21:59.675115 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:59.734255 kubelet[1549]: E1002 20:21:59.734146 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:21:59.742400 kubelet[1549]: E1002 20:21:59.742328 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:22:00.735044 kubelet[1549]: E1002 20:22:00.734967 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:01.736240 kubelet[1549]: E1002 20:22:01.736164 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:02.736776 kubelet[1549]: E1002 20:22:02.736674 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:03.737089 kubelet[1549]: E1002 20:22:03.736990 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:04.738213 kubelet[1549]: E1002 20:22:04.738103 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:04.743396 kubelet[1549]: E1002 20:22:04.743344 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:22:05.739012 kubelet[1549]: E1002 20:22:05.738936 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:06.739553 kubelet[1549]: E1002 20:22:06.739477 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:07.739997 kubelet[1549]: E1002 20:22:07.739894 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:08.740780 kubelet[1549]: E1002 20:22:08.740678 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 20:22:09.742032 kubelet[1549]: E1002 20:22:09.741928 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:09.744240 kubelet[1549]: E1002 20:22:09.744182 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:22:10.742949 kubelet[1549]: E1002 20:22:10.742842 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:10.873486 kubelet[1549]: E1002 20:22:10.873398 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:22:11.743792 kubelet[1549]: E1002 20:22:11.743688 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:12.744630 kubelet[1549]: E1002 20:22:12.744527 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:13.745681 kubelet[1549]: E1002 20:22:13.745575 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:14.745826 kubelet[1549]: E1002 20:22:14.745727 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:14.746658 kubelet[1549]: E1002 20:22:14.746018 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:22:15.746377 kubelet[1549]: E1002 20:22:15.746273 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:16.747498 kubelet[1549]: E1002 20:22:16.747374 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:17.748546 kubelet[1549]: E1002 20:22:17.748446 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:18.749782 kubelet[1549]: E1002 20:22:18.749678 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:19.675387 kubelet[1549]: E1002 20:22:19.675288 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:19.747233 kubelet[1549]: E1002 20:22:19.747121 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:22:19.750468 kubelet[1549]: E1002 20:22:19.750371 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:20.751209 kubelet[1549]: E1002 20:22:20.751109 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:21.751957 kubelet[1549]: E1002 
20:22:21.751855 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:21.877881 env[1156]: time="2023-10-02T20:22:21.877764591Z" level=info msg="CreateContainer within sandbox \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 20:22:21.891679 env[1156]: time="2023-10-02T20:22:21.891592033Z" level=info msg="CreateContainer within sandbox \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f\"" Oct 2 20:22:21.891993 env[1156]: time="2023-10-02T20:22:21.891965382Z" level=info msg="StartContainer for \"786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f\"" Oct 2 20:22:21.914046 systemd[1]: Started cri-containerd-786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f.scope. Oct 2 20:22:21.919538 systemd[1]: cri-containerd-786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f.scope: Deactivated successfully. Oct 2 20:22:21.919791 systemd[1]: Stopped cri-containerd-786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f.scope. Oct 2 20:22:21.922376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f-rootfs.mount: Deactivated successfully. Oct 2 20:22:21.924704 env[1156]: time="2023-10-02T20:22:21.924640166Z" level=info msg="shim disconnected" id=786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f Oct 2 20:22:21.924704 env[1156]: time="2023-10-02T20:22:21.924677445Z" level=warning msg="cleaning up after shim disconnected" id=786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f namespace=k8s.io Oct 2 20:22:21.924704 env[1156]: time="2023-10-02T20:22:21.924686080Z" level=info msg="cleaning up dead shim" Oct 2 20:22:21.942853 env[1156]: time="2023-10-02T20:22:21.942780320Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:22:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2078 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:22:21Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:22:21.943115 env[1156]: time="2023-10-02T20:22:21.943027374Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 20:22:21.943289 env[1156]: time="2023-10-02T20:22:21.943223080Z" level=error msg="Failed to pipe stdout of container \"786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f\"" error="reading from a closed fifo" Oct 2 20:22:21.943356 env[1156]: time="2023-10-02T20:22:21.943255123Z" level=error msg="Failed to pipe stderr of container \"786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f\"" error="reading from a closed fifo" Oct 2 20:22:21.957367 env[1156]: time="2023-10-02T20:22:21.957234887Z" level=error msg="StartContainer for \"786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 
20:22:21.957748 kubelet[1549]: E1002 20:22:21.957658 1549 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f" Oct 2 20:22:21.957964 kubelet[1549]: E1002 20:22:21.957858 1549 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:22:21.957964 kubelet[1549]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:22:21.957964 kubelet[1549]: rm /hostbin/cilium-mount Oct 2 20:22:21.957964 kubelet[1549]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dkdqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:22:21.958370 kubelet[1549]: E1002 20:22:21.957943 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:22:22.135544 kubelet[1549]: I1002 20:22:22.135433 1549 scope.go:115] "RemoveContainer" containerID="0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539" Oct 2 20:22:22.136309 kubelet[1549]: I1002 20:22:22.136228 1549 scope.go:115] "RemoveContainer" containerID="0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539" Oct 2 20:22:22.138302 env[1156]: 
time="2023-10-02T20:22:22.138179969Z" level=info msg="RemoveContainer for \"0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539\"" Oct 2 20:22:22.139076 env[1156]: time="2023-10-02T20:22:22.138954910Z" level=info msg="RemoveContainer for \"0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539\"" Oct 2 20:22:22.140355 env[1156]: time="2023-10-02T20:22:22.140162647Z" level=error msg="RemoveContainer for \"0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539\" failed" error="failed to set removing state for container \"0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539\": container is already in removing state" Oct 2 20:22:22.142482 kubelet[1549]: E1002 20:22:22.142440 1549 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539\": container is already in removing state" containerID="0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539" Oct 2 20:22:22.142694 kubelet[1549]: E1002 20:22:22.142521 1549 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539": container is already in removing state; Skipping pod "cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)" Oct 2 20:22:22.143592 kubelet[1549]: E1002 20:22:22.143511 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:22:22.144497 env[1156]: time="2023-10-02T20:22:22.144236406Z" level=info msg="RemoveContainer for \"0de97d651495bae56e2c322108aabb928bae6881782f4309aa3522258dcd8539\" returns successfully" Oct 2 20:22:22.752669 kubelet[1549]: E1002 20:22:22.752563 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:23.753440 kubelet[1549]: E1002 20:22:23.753310 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:24.749113 kubelet[1549]: E1002 20:22:24.749014 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:22:24.754593 kubelet[1549]: E1002 20:22:24.754514 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:25.031061 kubelet[1549]: W1002 20:22:25.030835 1549 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88c1dbe0_b003_4872_b7fa_d3b14f1b6fde.slice/cri-containerd-786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f.scope WatchSource:0}: task 786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f not found: not found Oct 2 20:22:25.755557 kubelet[1549]: E1002 20:22:25.755452 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:26.756811 kubelet[1549]: E1002 20:22:26.756708 1549 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:27.757694 kubelet[1549]: E1002 20:22:27.757593 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:28.758816 kubelet[1549]: E1002 20:22:28.758712 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:29.749823 kubelet[1549]: E1002 20:22:29.749714 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:22:29.759545 kubelet[1549]: E1002 20:22:29.759457 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:30.760694 kubelet[1549]: E1002 20:22:30.760590 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:31.761445 kubelet[1549]: E1002 20:22:31.761327 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:32.762178 kubelet[1549]: E1002 20:22:32.762075 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:33.763138 kubelet[1549]: E1002 20:22:33.763036 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:33.873312 kubelet[1549]: E1002 20:22:33.873200 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:22:34.751536 kubelet[1549]: E1002 20:22:34.751428 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:22:34.764041 kubelet[1549]: E1002 20:22:34.763937 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:35.764297 kubelet[1549]: E1002 20:22:35.764198 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:36.765447 kubelet[1549]: E1002 20:22:36.765326 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:37.765736 kubelet[1549]: E1002 20:22:37.765629 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:38.766797 kubelet[1549]: E1002 20:22:38.766694 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:39.674650 kubelet[1549]: E1002 20:22:39.674544 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:39.752393 kubelet[1549]: E1002 20:22:39.752300 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" Oct 2 20:22:39.767885 kubelet[1549]: E1002 20:22:39.767780 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:40.768784 kubelet[1549]: E1002 20:22:40.768681 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:41.769220 kubelet[1549]: E1002 20:22:41.769147 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:42.769732 kubelet[1549]: E1002 20:22:42.769678 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:43.769942 kubelet[1549]: E1002 20:22:43.769873 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:44.754477 kubelet[1549]: E1002 20:22:44.754373 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:22:44.770113 kubelet[1549]: E1002 20:22:44.770014 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:44.874173 kubelet[1549]: E1002 20:22:44.874080 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:22:45.770691 kubelet[1549]: E1002 20:22:45.770586 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:46.770991 kubelet[1549]: E1002 20:22:46.770885 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:47.771293 kubelet[1549]: E1002 20:22:47.771184 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:48.771744 kubelet[1549]: E1002 20:22:48.771636 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:49.755272 kubelet[1549]: E1002 20:22:49.755176 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:22:49.771980 kubelet[1549]: E1002 20:22:49.771875 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:50.772340 kubelet[1549]: E1002 20:22:50.772235 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:51.773339 kubelet[1549]: E1002 20:22:51.773236 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:52.773634 kubelet[1549]: E1002 20:22:52.773530 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:53.774765 kubelet[1549]: E1002 20:22:53.774664 1549 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:54.756832 kubelet[1549]: E1002 20:22:54.756734 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:22:54.775627 kubelet[1549]: E1002 20:22:54.775527 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:55.775910 kubelet[1549]: E1002 20:22:55.775794 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:56.777030 kubelet[1549]: E1002 20:22:56.776928 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:57.777586 kubelet[1549]: E1002 20:22:57.777481 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:57.873853 kubelet[1549]: E1002 20:22:57.873790 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:22:58.778455 kubelet[1549]: E1002 20:22:58.778348 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:59.675605 kubelet[1549]: E1002 20:22:59.675502 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:22:59.757950 kubelet[1549]: E1002 20:22:59.757847 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:22:59.778647 kubelet[1549]: E1002 20:22:59.778540 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:00.779006 kubelet[1549]: E1002 20:23:00.778893 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:01.779183 kubelet[1549]: E1002 20:23:01.779063 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:02.779915 kubelet[1549]: E1002 20:23:02.779811 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:03.780555 kubelet[1549]: E1002 20:23:03.780454 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:04.759111 kubelet[1549]: E1002 20:23:04.759006 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:23:04.780808 kubelet[1549]: E1002 20:23:04.780705 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:05.781636 kubelet[1549]: E1002 20:23:05.781528 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:06.782165 
kubelet[1549]: E1002 20:23:06.782096 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:07.783309 kubelet[1549]: E1002 20:23:07.783204 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:08.784078 kubelet[1549]: E1002 20:23:08.783974 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:09.759901 kubelet[1549]: E1002 20:23:09.759813 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:23:09.784635 kubelet[1549]: E1002 20:23:09.784531 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:09.872638 kubelet[1549]: E1002 20:23:09.872574 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:23:10.784859 kubelet[1549]: E1002 20:23:10.784754 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:11.785808 kubelet[1549]: E1002 20:23:11.785705 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:12.786008 kubelet[1549]: E1002 20:23:12.785934 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:13.787215 kubelet[1549]: E1002 20:23:13.787102 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:14.761920 kubelet[1549]: E1002 20:23:14.761809 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:23:14.788053 kubelet[1549]: E1002 20:23:14.787944 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:15.789039 kubelet[1549]: E1002 20:23:15.788939 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:16.789752 kubelet[1549]: E1002 20:23:16.789649 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:17.790185 kubelet[1549]: E1002 20:23:17.790078 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:18.790440 kubelet[1549]: E1002 20:23:18.790220 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:19.674723 kubelet[1549]: E1002 20:23:19.674615 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:19.762739 kubelet[1549]: E1002 20:23:19.762649 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:23:19.790540 kubelet[1549]: E1002 20:23:19.790398 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:20.790782 kubelet[1549]: E1002 20:23:20.790675 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:20.873735 kubelet[1549]: E1002 20:23:20.873639 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:23:21.791559 kubelet[1549]: E1002 20:23:21.791457 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:22.792233 kubelet[1549]: E1002 20:23:22.792165 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:23.792443 kubelet[1549]: E1002 20:23:23.792322 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:24.764136 kubelet[1549]: E1002 20:23:24.764035 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:23:24.792709 kubelet[1549]: E1002 20:23:24.792602 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:25.793789 kubelet[1549]: E1002 20:23:25.793689 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:26.794939 kubelet[1549]: E1002 20:23:26.794838 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:27.795209 kubelet[1549]: E1002 20:23:27.795109 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:28.795627 kubelet[1549]: E1002 20:23:28.795524 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:29.765097 kubelet[1549]: E1002 20:23:29.764999 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:23:29.796450 kubelet[1549]: E1002 20:23:29.796348 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:30.797558 kubelet[1549]: E1002 20:23:30.797458 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:31.798286 kubelet[1549]: E1002 20:23:31.798170 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:31.874010 kubelet[1549]: E1002 20:23:31.873948 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed 
container=mount-cgroup pod=cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:23:32.799249 kubelet[1549]: E1002 20:23:32.799146 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:33.799566 kubelet[1549]: E1002 20:23:33.799462 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:34.766312 kubelet[1549]: E1002 20:23:34.766215 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:23:34.800352 kubelet[1549]: E1002 20:23:34.800249 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:35.800800 kubelet[1549]: E1002 20:23:35.800697 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:36.801003 kubelet[1549]: E1002 20:23:36.800845 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:37.801173 kubelet[1549]: E1002 20:23:37.801036 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:38.801473 kubelet[1549]: E1002 20:23:38.801368 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:39.674688 kubelet[1549]: E1002 20:23:39.674583 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:39.767100 kubelet[1549]: E1002 20:23:39.767036 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:23:39.801760 kubelet[1549]: E1002 20:23:39.801661 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:40.802774 kubelet[1549]: E1002 20:23:40.802695 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:41.803781 kubelet[1549]: E1002 20:23:41.803705 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:42.804061 kubelet[1549]: E1002 20:23:42.803938 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:43.804504 kubelet[1549]: E1002 20:23:43.804423 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:43.879091 env[1156]: time="2023-10-02T20:23:43.879051059Z" level=info msg="CreateContainer within sandbox \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Oct 2 20:23:43.883032 env[1156]: time="2023-10-02T20:23:43.883015211Z" level=info msg="CreateContainer within sandbox \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id 
\"637c5f1cd74fb26f3cdfab86de584fb8752d768a59962f7648c583ebac251283\"" Oct 2 20:23:43.883229 env[1156]: time="2023-10-02T20:23:43.883213190Z" level=info msg="StartContainer for \"637c5f1cd74fb26f3cdfab86de584fb8752d768a59962f7648c583ebac251283\"" Oct 2 20:23:43.906623 systemd[1]: Started cri-containerd-637c5f1cd74fb26f3cdfab86de584fb8752d768a59962f7648c583ebac251283.scope. Oct 2 20:23:43.912119 systemd[1]: cri-containerd-637c5f1cd74fb26f3cdfab86de584fb8752d768a59962f7648c583ebac251283.scope: Deactivated successfully. Oct 2 20:23:43.912268 systemd[1]: Stopped cri-containerd-637c5f1cd74fb26f3cdfab86de584fb8752d768a59962f7648c583ebac251283.scope. Oct 2 20:23:43.914064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-637c5f1cd74fb26f3cdfab86de584fb8752d768a59962f7648c583ebac251283-rootfs.mount: Deactivated successfully. Oct 2 20:23:43.916445 env[1156]: time="2023-10-02T20:23:43.916389624Z" level=info msg="shim disconnected" id=637c5f1cd74fb26f3cdfab86de584fb8752d768a59962f7648c583ebac251283 Oct 2 20:23:43.916445 env[1156]: time="2023-10-02T20:23:43.916425740Z" level=warning msg="cleaning up after shim disconnected" id=637c5f1cd74fb26f3cdfab86de584fb8752d768a59962f7648c583ebac251283 namespace=k8s.io Oct 2 20:23:43.916445 env[1156]: time="2023-10-02T20:23:43.916432108Z" level=info msg="cleaning up dead shim" Oct 2 20:23:43.932465 env[1156]: time="2023-10-02T20:23:43.932391740Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:23:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2122 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:23:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/637c5f1cd74fb26f3cdfab86de584fb8752d768a59962f7648c583ebac251283/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:23:43.932625 env[1156]: time="2023-10-02T20:23:43.932564288Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 20:23:43.932758 env[1156]: time="2023-10-02T20:23:43.932718287Z" level=error msg="Failed to pipe stdout of container \"637c5f1cd74fb26f3cdfab86de584fb8752d768a59962f7648c583ebac251283\"" error="reading from a closed fifo" Oct 2 20:23:43.932758 env[1156]: time="2023-10-02T20:23:43.932717633Z" level=error msg="Failed to pipe stderr of container \"637c5f1cd74fb26f3cdfab86de584fb8752d768a59962f7648c583ebac251283\"" error="reading from a closed fifo" Oct 2 20:23:43.933446 env[1156]: time="2023-10-02T20:23:43.933417246Z" level=error msg="StartContainer for \"637c5f1cd74fb26f3cdfab86de584fb8752d768a59962f7648c583ebac251283\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:23:43.933634 kubelet[1549]: E1002 20:23:43.933590 1549 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="637c5f1cd74fb26f3cdfab86de584fb8752d768a59962f7648c583ebac251283" Oct 2 20:23:43.933710 kubelet[1549]: E1002 20:23:43.933670 1549 kuberuntime_manager.go:862] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:23:43.933710 kubelet[1549]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:23:43.933710 kubelet[1549]: rm /hostbin/cilium-mount Oct 2 20:23:43.933710 kubelet[1549]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dkdqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:23:43.933860 kubelet[1549]: E1002 20:23:43.933720 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:23:44.332064 kubelet[1549]: I1002 20:23:44.331973 1549 scope.go:115] "RemoveContainer" containerID="786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f" Oct 2 20:23:44.332686 kubelet[1549]: I1002 20:23:44.332639 1549 scope.go:115] "RemoveContainer" containerID="786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f" Oct 2 20:23:44.334594 env[1156]: time="2023-10-02T20:23:44.334501759Z" level=info msg="RemoveContainer for \"786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f\"" Oct 2 20:23:44.335396 env[1156]: time="2023-10-02T20:23:44.335310730Z" level=info msg="RemoveContainer for \"786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f\"" Oct 2 20:23:44.335660 env[1156]: time="2023-10-02T20:23:44.335569042Z" level=error msg="RemoveContainer for \"786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f\" failed" error="failed to set removing state for container 
\"786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f\": container is already in removing state" Oct 2 20:23:44.335926 kubelet[1549]: E1002 20:23:44.335883 1549 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f\": container is already in removing state" containerID="786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f" Oct 2 20:23:44.336159 kubelet[1549]: E1002 20:23:44.335958 1549 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f": container is already in removing state; Skipping pod "cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)" Oct 2 20:23:44.336655 kubelet[1549]: E1002 20:23:44.336578 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-slp8v_kube-system(88c1dbe0-b003-4872-b7fa-d3b14f1b6fde)\"" pod="kube-system/cilium-slp8v" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde Oct 2 20:23:44.338016 env[1156]: time="2023-10-02T20:23:44.337873149Z" level=info msg="RemoveContainer for \"786bb3087b9b9cbed0f1353751f344c10c9db11f3679d37fa650fba0d8a8660f\" returns successfully" Oct 2 20:23:44.769261 kubelet[1549]: E1002 20:23:44.769041 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:23:44.805562 kubelet[1549]: E1002 20:23:44.805455 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:45.806421 kubelet[1549]: E1002 20:23:45.806298 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:46.807382 kubelet[1549]: E1002 20:23:46.807276 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:47.024588 kubelet[1549]: W1002 20:23:47.024469 1549 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88c1dbe0_b003_4872_b7fa_d3b14f1b6fde.slice/cri-containerd-637c5f1cd74fb26f3cdfab86de584fb8752d768a59962f7648c583ebac251283.scope WatchSource:0}: task 637c5f1cd74fb26f3cdfab86de584fb8752d768a59962f7648c583ebac251283 not found: not found Oct 2 20:23:47.808174 kubelet[1549]: E1002 20:23:47.808066 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:48.808669 kubelet[1549]: E1002 20:23:48.808561 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:49.770072 kubelet[1549]: E1002 20:23:49.769974 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:23:49.808803 kubelet[1549]: E1002 20:23:49.808726 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:50.809901 kubelet[1549]: 
E1002 20:23:50.809832 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:50.866479 env[1156]: time="2023-10-02T20:23:50.866359034Z" level=info msg="StopPodSandbox for \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\"" Oct 2 20:23:50.867365 env[1156]: time="2023-10-02T20:23:50.866547160Z" level=info msg="Container to stop \"637c5f1cd74fb26f3cdfab86de584fb8752d768a59962f7648c583ebac251283\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:23:50.870732 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37-shm.mount: Deactivated successfully. Oct 2 20:23:50.886770 systemd[1]: cri-containerd-13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37.scope: Deactivated successfully. Oct 2 20:23:50.886000 audit: BPF prog-id=60 op=UNLOAD Oct 2 20:23:50.912596 kernel: kauditd_printk_skb: 286 callbacks suppressed Oct 2 20:23:50.912646 kernel: audit: type=1334 audit(1696278230.886:619): prog-id=60 op=UNLOAD Oct 2 20:23:50.941000 audit: BPF prog-id=66 op=UNLOAD Oct 2 20:23:50.943396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37-rootfs.mount: Deactivated successfully. Oct 2 20:23:50.969476 kernel: audit: type=1334 audit(1696278230.941:620): prog-id=66 op=UNLOAD Oct 2 20:23:50.996914 env[1156]: time="2023-10-02T20:23:50.996835743Z" level=info msg="shim disconnected" id=13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37 Oct 2 20:23:50.996914 env[1156]: time="2023-10-02T20:23:50.996902012Z" level=warning msg="cleaning up after shim disconnected" id=13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37 namespace=k8s.io Oct 2 20:23:50.996914 env[1156]: time="2023-10-02T20:23:50.996908576Z" level=info msg="cleaning up dead shim" Oct 2 20:23:51.012303 env[1156]: time="2023-10-02T20:23:51.012246361Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:23:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2153 runtime=io.containerd.runc.v2\n" Oct 2 20:23:51.012479 env[1156]: time="2023-10-02T20:23:51.012422060Z" level=info msg="TearDown network for sandbox \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\" successfully" Oct 2 20:23:51.012479 env[1156]: time="2023-10-02T20:23:51.012436442Z" level=info msg="StopPodSandbox for \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\" returns successfully" Oct 2 20:23:51.130563 kubelet[1549]: I1002 20:23:51.130498 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-host-proc-sys-kernel\") pod \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " Oct 2 20:23:51.130970 kubelet[1549]: I1002 20:23:51.130589 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-cilium-run\") pod \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " Oct 2 20:23:51.130970 kubelet[1549]: I1002 20:23:51.130607 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod 
"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" (UID: "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:23:51.130970 kubelet[1549]: I1002 20:23:51.130648 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-etc-cni-netd\") pod \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " Oct 2 20:23:51.130970 kubelet[1549]: I1002 20:23:51.130699 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" (UID: "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:23:51.130970 kubelet[1549]: I1002 20:23:51.130723 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" (UID: "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:23:51.131809 kubelet[1549]: I1002 20:23:51.130777 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-cilium-cgroup\") pod \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " Oct 2 20:23:51.131809 kubelet[1549]: I1002 20:23:51.130839 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-xtables-lock\") pod \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " Oct 2 20:23:51.131809 kubelet[1549]: I1002 20:23:51.130895 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-cni-path\") pod \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " Oct 2 20:23:51.131809 kubelet[1549]: I1002 20:23:51.130869 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" (UID: "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:23:51.131809 kubelet[1549]: I1002 20:23:51.130912 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" (UID: "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:23:51.131809 kubelet[1549]: I1002 20:23:51.130951 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-host-proc-sys-net\") pod \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " Oct 2 20:23:51.132633 kubelet[1549]: I1002 20:23:51.131003 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-bpf-maps\") pod \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " Oct 2 20:23:51.132633 kubelet[1549]: I1002 20:23:51.131004 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-cni-path" (OuterVolumeSpecName: "cni-path") pod "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" (UID: "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:23:51.132633 kubelet[1549]: I1002 20:23:51.131074 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-cilium-config-path\") pod \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " Oct 2 20:23:51.132633 kubelet[1549]: I1002 20:23:51.131096 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" (UID: "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:23:51.132633 kubelet[1549]: I1002 20:23:51.131130 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-lib-modules\") pod \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " Oct 2 20:23:51.133220 kubelet[1549]: I1002 20:23:51.131073 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" (UID: "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:23:51.133220 kubelet[1549]: I1002 20:23:51.131181 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-hostproc\") pod \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " Oct 2 20:23:51.133220 kubelet[1549]: I1002 20:23:51.131214 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" (UID: "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:23:51.133220 kubelet[1549]: I1002 20:23:51.131243 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-hubble-tls\") pod \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " Oct 2 20:23:51.133220 kubelet[1549]: I1002 20:23:51.131269 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-hostproc" (OuterVolumeSpecName: "hostproc") pod "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" (UID: "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:23:51.133820 kubelet[1549]: I1002 20:23:51.131307 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkdqg\" (UniqueName: \"kubernetes.io/projected/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-kube-api-access-dkdqg\") pod \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " Oct 2 20:23:51.133820 kubelet[1549]: I1002 20:23:51.131376 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-clustermesh-secrets\") pod \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\" (UID: \"88c1dbe0-b003-4872-b7fa-d3b14f1b6fde\") " Oct 2 20:23:51.133820 kubelet[1549]: I1002 20:23:51.131470 1549 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-etc-cni-netd\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:51.133820 kubelet[1549]: W1002 20:23:51.131452 1549 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:23:51.133820 kubelet[1549]: I1002 20:23:51.131507 1549 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-cilium-cgroup\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:51.133820 kubelet[1549]: I1002 20:23:51.131540 1549 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-xtables-lock\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:51.133820 kubelet[1549]: I1002 20:23:51.131570 1549 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-cni-path\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:51.133820 kubelet[1549]: I1002 20:23:51.131602 1549 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-host-proc-sys-kernel\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:51.134678 kubelet[1549]: I1002 20:23:51.131634 1549 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-cilium-run\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:51.134678 kubelet[1549]: I1002 20:23:51.131664 1549 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-host-proc-sys-net\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:51.134678 kubelet[1549]: I1002 20:23:51.131693 1549 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-bpf-maps\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:51.134678 kubelet[1549]: I1002 20:23:51.131721 1549 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-lib-modules\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:51.134678 kubelet[1549]: I1002 20:23:51.131750 1549 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-hostproc\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:51.136094 kubelet[1549]: I1002 20:23:51.136056 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" (UID: "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:23:51.136433 kubelet[1549]: I1002 20:23:51.136420 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" (UID: "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:23:51.136515 kubelet[1549]: I1002 20:23:51.136434 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-kube-api-access-dkdqg" (OuterVolumeSpecName: "kube-api-access-dkdqg") pod "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" (UID: "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde"). InnerVolumeSpecName "kube-api-access-dkdqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:23:51.136539 kubelet[1549]: I1002 20:23:51.136522 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" (UID: "88c1dbe0-b003-4872-b7fa-d3b14f1b6fde"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:23:51.137001 systemd[1]: var-lib-kubelet-pods-88c1dbe0\x2db003\x2d4872\x2db7fa\x2dd3b14f1b6fde-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddkdqg.mount: Deactivated successfully. Oct 2 20:23:51.137054 systemd[1]: var-lib-kubelet-pods-88c1dbe0\x2db003\x2d4872\x2db7fa\x2dd3b14f1b6fde-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 20:23:51.137087 systemd[1]: var-lib-kubelet-pods-88c1dbe0\x2db003\x2d4872\x2db7fa\x2dd3b14f1b6fde-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Oct 2 20:23:51.232165 kubelet[1549]: I1002 20:23:51.232051 1549 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-hubble-tls\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:51.232165 kubelet[1549]: I1002 20:23:51.232133 1549 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-clustermesh-secrets\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:51.232165 kubelet[1549]: I1002 20:23:51.232170 1549 reconciler.go:399] "Volume detached for volume \"kube-api-access-dkdqg\" (UniqueName: \"kubernetes.io/projected/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-kube-api-access-dkdqg\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:51.232816 kubelet[1549]: I1002 20:23:51.232201 1549 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde-cilium-config-path\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:51.355045 kubelet[1549]: I1002 20:23:51.354988 1549 scope.go:115] "RemoveContainer" containerID="637c5f1cd74fb26f3cdfab86de584fb8752d768a59962f7648c583ebac251283" Oct 2 20:23:51.357450 env[1156]: time="2023-10-02T20:23:51.357352656Z" level=info msg="RemoveContainer for \"637c5f1cd74fb26f3cdfab86de584fb8752d768a59962f7648c583ebac251283\"" Oct 2 20:23:51.360343 env[1156]: time="2023-10-02T20:23:51.360331025Z" level=info msg="RemoveContainer for \"637c5f1cd74fb26f3cdfab86de584fb8752d768a59962f7648c583ebac251283\" returns successfully" Oct 2 20:23:51.361361 systemd[1]: Removed slice kubepods-burstable-pod88c1dbe0_b003_4872_b7fa_d3b14f1b6fde.slice. Oct 2 20:23:51.403038 kubelet[1549]: I1002 20:23:51.402829 1549 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:23:51.403038 kubelet[1549]: E1002 20:23:51.402941 1549 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" containerName="mount-cgroup" Oct 2 20:23:51.403038 kubelet[1549]: E1002 20:23:51.402988 1549 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" containerName="mount-cgroup" Oct 2 20:23:51.403038 kubelet[1549]: E1002 20:23:51.403023 1549 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" containerName="mount-cgroup" Oct 2 20:23:51.403970 kubelet[1549]: E1002 20:23:51.403055 1549 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" containerName="mount-cgroup" Oct 2 20:23:51.403970 kubelet[1549]: I1002 20:23:51.403126 1549 memory_manager.go:345] "RemoveStaleState removing state" podUID="88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" containerName="mount-cgroup" Oct 2 20:23:51.403970 kubelet[1549]: I1002 20:23:51.403160 1549 memory_manager.go:345] "RemoveStaleState removing state" podUID="88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" containerName="mount-cgroup" Oct 2 20:23:51.403970 kubelet[1549]: I1002 20:23:51.403190 1549 memory_manager.go:345] "RemoveStaleState removing state" podUID="88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" containerName="mount-cgroup" Oct 2 20:23:51.403970 kubelet[1549]: I1002 20:23:51.403221 1549 memory_manager.go:345] "RemoveStaleState removing state" podUID="88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" containerName="mount-cgroup" Oct 2 20:23:51.403970 kubelet[1549]: E1002 20:23:51.403286 1549 cpu_manager.go:394] "RemoveStaleState: removing 
container" podUID="88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" containerName="mount-cgroup" Oct 2 20:23:51.403970 kubelet[1549]: E1002 20:23:51.403321 1549 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" containerName="mount-cgroup" Oct 2 20:23:51.403970 kubelet[1549]: I1002 20:23:51.403380 1549 memory_manager.go:345] "RemoveStaleState removing state" podUID="88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" containerName="mount-cgroup" Oct 2 20:23:51.403970 kubelet[1549]: I1002 20:23:51.403430 1549 memory_manager.go:345] "RemoveStaleState removing state" podUID="88c1dbe0-b003-4872-b7fa-d3b14f1b6fde" containerName="mount-cgroup" Oct 2 20:23:51.418009 systemd[1]: Created slice kubepods-burstable-podf8faacef_d27b_4eae_ab0a_ae0ae4a80f85.slice. Oct 2 20:23:51.534483 kubelet[1549]: I1002 20:23:51.534371 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-cni-path\") pod \"cilium-b7l2r\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " pod="kube-system/cilium-b7l2r" Oct 2 20:23:51.534755 kubelet[1549]: I1002 20:23:51.534582 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-etc-cni-netd\") pod \"cilium-b7l2r\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " pod="kube-system/cilium-b7l2r" Oct 2 20:23:51.534755 kubelet[1549]: I1002 20:23:51.534683 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-lib-modules\") pod \"cilium-b7l2r\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " pod="kube-system/cilium-b7l2r" Oct 2 20:23:51.534755 kubelet[1549]: I1002 20:23:51.534747 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-cilium-run\") pod \"cilium-b7l2r\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " pod="kube-system/cilium-b7l2r" Oct 2 20:23:51.535376 kubelet[1549]: I1002 20:23:51.534809 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-xtables-lock\") pod \"cilium-b7l2r\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " pod="kube-system/cilium-b7l2r" Oct 2 20:23:51.535376 kubelet[1549]: I1002 20:23:51.534930 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-clustermesh-secrets\") pod \"cilium-b7l2r\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " pod="kube-system/cilium-b7l2r" Oct 2 20:23:51.535376 kubelet[1549]: I1002 20:23:51.535149 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-host-proc-sys-kernel\") pod \"cilium-b7l2r\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " pod="kube-system/cilium-b7l2r" Oct 2 20:23:51.535376 kubelet[1549]: I1002 20:23:51.535312 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-hostproc\") pod \"cilium-b7l2r\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " pod="kube-system/cilium-b7l2r" Oct 2 20:23:51.536133 kubelet[1549]: I1002 20:23:51.535451 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-bpf-maps\") pod \"cilium-b7l2r\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " pod="kube-system/cilium-b7l2r" Oct 2 20:23:51.536133 kubelet[1549]: I1002 20:23:51.535550 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-cilium-cgroup\") pod \"cilium-b7l2r\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " pod="kube-system/cilium-b7l2r" Oct 2 20:23:51.536133 kubelet[1549]: I1002 20:23:51.535616 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-cilium-config-path\") pod \"cilium-b7l2r\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " pod="kube-system/cilium-b7l2r" Oct 2 20:23:51.536133 kubelet[1549]: I1002 20:23:51.535771 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-host-proc-sys-net\") pod \"cilium-b7l2r\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " pod="kube-system/cilium-b7l2r" Oct 2 20:23:51.536133 kubelet[1549]: I1002 20:23:51.535893 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-hubble-tls\") pod \"cilium-b7l2r\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " pod="kube-system/cilium-b7l2r" Oct 2 20:23:51.536133 kubelet[1549]: I1002 20:23:51.536015 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8k66\" (UniqueName: \"kubernetes.io/projected/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-kube-api-access-v8k66\") pod \"cilium-b7l2r\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " pod="kube-system/cilium-b7l2r" Oct 2 20:23:51.732315 env[1156]: time="2023-10-02T20:23:51.732181116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b7l2r,Uid:f8faacef-d27b-4eae-ab0a-ae0ae4a80f85,Namespace:kube-system,Attempt:0,}" Oct 2 20:23:51.744628 env[1156]: time="2023-10-02T20:23:51.744499472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:23:51.744628 env[1156]: time="2023-10-02T20:23:51.744566966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:23:51.744628 env[1156]: time="2023-10-02T20:23:51.744591132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:23:51.744933 env[1156]: time="2023-10-02T20:23:51.744858630Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/512d60a316fb6f24eaa0ea30910b686b80f0d3dece553cbabd10fb4cbbdb1e02 pid=2178 runtime=io.containerd.runc.v2 Oct 2 20:23:51.772408 systemd[1]: Started cri-containerd-512d60a316fb6f24eaa0ea30910b686b80f0d3dece553cbabd10fb4cbbdb1e02.scope. Oct 2 20:23:51.777000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.810413 kubelet[1549]: E1002 20:23:51.810395 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:51.777000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.872932 env[1156]: time="2023-10-02T20:23:51.872909105Z" level=info msg="StopPodSandbox for \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\"" Oct 2 20:23:51.873063 env[1156]: time="2023-10-02T20:23:51.872966297Z" level=info msg="TearDown network for sandbox \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\" successfully" Oct 2 20:23:51.873063 env[1156]: time="2023-10-02T20:23:51.872993263Z" level=info msg="StopPodSandbox for \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\" returns successfully" Oct 2 20:23:51.873334 kubelet[1549]: I1002 20:23:51.873327 1549 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=88c1dbe0-b003-4872-b7fa-d3b14f1b6fde path="/var/lib/kubelet/pods/88c1dbe0-b003-4872-b7fa-d3b14f1b6fde/volumes" Oct 2 20:23:51.894704 kernel: audit: type=1400 audit(1696278231.777:621): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.894740 kernel: audit: type=1400 audit(1696278231.777:622): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.894756 kernel: audit: type=1400 audit(1696278231.777:623): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.777000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.952827 kernel: audit: type=1400 audit(1696278231.777:624): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.011272 kernel: audit: type=1400 audit(1696278231.777:625): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.777000 audit[1]: AVC 
avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.069816 kernel: audit: type=1400 audit(1696278231.777:626): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.128284 kernel: audit: type=1400 audit(1696278231.777:627): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.186839 kernel: audit: type=1400 audit(1696278231.777:628): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.777000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.894000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.894000 audit: BPF prog-id=70 op=LOAD Oct 2 20:23:51.894000 audit[2188]: AVC avc: denied { bpf } for pid=2188 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.894000 audit[2188]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2178 pid=2188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:23:51.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531326436306133313666623666323465616130656133303931306236 Oct 2 20:23:51.894000 audit[2188]: AVC avc: denied { perfmon } for pid=2188 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.894000 audit[2188]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2178 pid=2188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:23:51.894000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531326436306133313666623666323465616130656133303931306236 Oct 2 20:23:51.894000 audit[2188]: AVC avc: denied { bpf } for pid=2188 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.894000 audit[2188]: AVC avc: denied { bpf } for pid=2188 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.894000 audit[2188]: AVC avc: denied { bpf } for pid=2188 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.894000 audit[2188]: AVC avc: denied { perfmon } for pid=2188 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.894000 audit[2188]: AVC avc: denied { perfmon } for pid=2188 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.894000 audit[2188]: AVC avc: denied { perfmon } for pid=2188 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.894000 audit[2188]: AVC avc: denied { perfmon } for pid=2188 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.894000 audit[2188]: AVC avc: denied { perfmon } for pid=2188 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.894000 audit[2188]: AVC avc: denied { bpf } for pid=2188 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.894000 audit[2188]: AVC avc: denied { bpf } for pid=2188 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:51.894000 audit: BPF prog-id=71 op=LOAD Oct 2 20:23:51.894000 audit[2188]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000092ba0 items=0 ppid=2178 pid=2188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:23:51.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531326436306133313666623666323465616130656133303931306236 Oct 2 20:23:52.010000 audit[2188]: AVC avc: denied { bpf } for pid=2188 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.010000 audit[2188]: AVC avc: denied { bpf } for pid=2188 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.010000 audit[2188]: AVC avc: denied { perfmon } 
for pid=2188 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.010000 audit[2188]: AVC avc: denied { perfmon } for pid=2188 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.010000 audit[2188]: AVC avc: denied { perfmon } for pid=2188 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.010000 audit[2188]: AVC avc: denied { perfmon } for pid=2188 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.010000 audit[2188]: AVC avc: denied { perfmon } for pid=2188 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.010000 audit[2188]: AVC avc: denied { bpf } for pid=2188 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.010000 audit[2188]: AVC avc: denied { bpf } for pid=2188 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.010000 audit: BPF prog-id=72 op=LOAD Oct 2 20:23:52.010000 audit[2188]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000092be8 items=0 ppid=2178 pid=2188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:23:52.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531326436306133313666623666323465616130656133303931306236 Oct 2 20:23:52.127000 audit: BPF prog-id=72 op=UNLOAD Oct 2 20:23:52.127000 audit: BPF prog-id=71 op=UNLOAD Oct 2 20:23:52.127000 audit[2188]: AVC avc: denied { bpf } for pid=2188 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.127000 audit[2188]: AVC avc: denied { bpf } for pid=2188 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.127000 audit[2188]: AVC avc: denied { bpf } for pid=2188 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.127000 audit[2188]: AVC avc: denied { perfmon } for pid=2188 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.127000 audit[2188]: AVC avc: denied { perfmon } for pid=2188 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.127000 audit[2188]: AVC avc: denied { perfmon } for pid=2188 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.127000 audit[2188]: AVC avc: denied { perfmon } 
for pid=2188 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.127000 audit[2188]: AVC avc: denied { perfmon } for pid=2188 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.127000 audit[2188]: AVC avc: denied { bpf } for pid=2188 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.127000 audit[2188]: AVC avc: denied { bpf } for pid=2188 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:52.127000 audit: BPF prog-id=73 op=LOAD Oct 2 20:23:52.127000 audit[2188]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000092ff8 items=0 ppid=2178 pid=2188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:23:52.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531326436306133313666623666323465616130656133303931306236 Oct 2 20:23:52.263302 env[1156]: time="2023-10-02T20:23:52.263254324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b7l2r,Uid:f8faacef-d27b-4eae-ab0a-ae0ae4a80f85,Namespace:kube-system,Attempt:0,} returns sandbox id \"512d60a316fb6f24eaa0ea30910b686b80f0d3dece553cbabd10fb4cbbdb1e02\"" Oct 2 20:23:52.264329 env[1156]: time="2023-10-02T20:23:52.264315336Z" level=info msg="CreateContainer within sandbox \"512d60a316fb6f24eaa0ea30910b686b80f0d3dece553cbabd10fb4cbbdb1e02\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 20:23:52.268631 env[1156]: time="2023-10-02T20:23:52.268507814Z" level=info msg="CreateContainer within sandbox \"512d60a316fb6f24eaa0ea30910b686b80f0d3dece553cbabd10fb4cbbdb1e02\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"65554c22cbdbddb81d0549f4d18e57af32e2491aac865a6c7a4e5f9f0eb0a279\"" Oct 2 20:23:52.268947 env[1156]: time="2023-10-02T20:23:52.268908482Z" level=info msg="StartContainer for \"65554c22cbdbddb81d0549f4d18e57af32e2491aac865a6c7a4e5f9f0eb0a279\"" Oct 2 20:23:52.289997 systemd[1]: Started cri-containerd-65554c22cbdbddb81d0549f4d18e57af32e2491aac865a6c7a4e5f9f0eb0a279.scope. Oct 2 20:23:52.294642 systemd[1]: cri-containerd-65554c22cbdbddb81d0549f4d18e57af32e2491aac865a6c7a4e5f9f0eb0a279.scope: Deactivated successfully. Oct 2 20:23:52.294789 systemd[1]: Stopped cri-containerd-65554c22cbdbddb81d0549f4d18e57af32e2491aac865a6c7a4e5f9f0eb0a279.scope. 
Oct 2 20:23:52.301319 env[1156]: time="2023-10-02T20:23:52.301291521Z" level=info msg="shim disconnected" id=65554c22cbdbddb81d0549f4d18e57af32e2491aac865a6c7a4e5f9f0eb0a279 Oct 2 20:23:52.301392 env[1156]: time="2023-10-02T20:23:52.301321338Z" level=warning msg="cleaning up after shim disconnected" id=65554c22cbdbddb81d0549f4d18e57af32e2491aac865a6c7a4e5f9f0eb0a279 namespace=k8s.io Oct 2 20:23:52.301392 env[1156]: time="2023-10-02T20:23:52.301328853Z" level=info msg="cleaning up dead shim" Oct 2 20:23:52.317340 env[1156]: time="2023-10-02T20:23:52.317288702Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:23:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2234 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:23:52Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/65554c22cbdbddb81d0549f4d18e57af32e2491aac865a6c7a4e5f9f0eb0a279/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:23:52.317514 env[1156]: time="2023-10-02T20:23:52.317451278Z" level=error msg="copy shim log" error="read /proc/self/fd/29: file already closed" Oct 2 20:23:52.317636 env[1156]: time="2023-10-02T20:23:52.317582836Z" level=error msg="Failed to pipe stderr of container \"65554c22cbdbddb81d0549f4d18e57af32e2491aac865a6c7a4e5f9f0eb0a279\"" error="reading from a closed fifo" Oct 2 20:23:52.317636 env[1156]: time="2023-10-02T20:23:52.317575659Z" level=error msg="Failed to pipe stdout of container \"65554c22cbdbddb81d0549f4d18e57af32e2491aac865a6c7a4e5f9f0eb0a279\"" error="reading from a closed fifo" Oct 2 20:23:52.318113 env[1156]: time="2023-10-02T20:23:52.318058622Z" level=error msg="StartContainer for \"65554c22cbdbddb81d0549f4d18e57af32e2491aac865a6c7a4e5f9f0eb0a279\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:23:52.318237 kubelet[1549]: E1002 20:23:52.318180 1549 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="65554c22cbdbddb81d0549f4d18e57af32e2491aac865a6c7a4e5f9f0eb0a279" Oct 2 20:23:52.318396 kubelet[1549]: E1002 20:23:52.318256 1549 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:23:52.318396 kubelet[1549]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:23:52.318396 kubelet[1549]: rm /hostbin/cilium-mount Oct 2 20:23:52.318396 kubelet[1549]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-v8k66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-b7l2r_kube-system(f8faacef-d27b-4eae-ab0a-ae0ae4a80f85): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:23:52.318554 kubelet[1549]: E1002 20:23:52.318284 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-b7l2r" podUID=f8faacef-d27b-4eae-ab0a-ae0ae4a80f85 Oct 2 20:23:52.363113 env[1156]: time="2023-10-02T20:23:52.362988724Z" level=info msg="StopPodSandbox for \"512d60a316fb6f24eaa0ea30910b686b80f0d3dece553cbabd10fb4cbbdb1e02\"" Oct 2 20:23:52.363439 env[1156]: time="2023-10-02T20:23:52.363114933Z" level=info msg="Container to stop \"65554c22cbdbddb81d0549f4d18e57af32e2491aac865a6c7a4e5f9f0eb0a279\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:23:52.377322 systemd[1]: cri-containerd-512d60a316fb6f24eaa0ea30910b686b80f0d3dece553cbabd10fb4cbbdb1e02.scope: Deactivated successfully. 
Oct 2 20:23:52.377000 audit: BPF prog-id=70 op=UNLOAD Oct 2 20:23:52.392000 audit: BPF prog-id=73 op=UNLOAD Oct 2 20:23:52.430191 env[1156]: time="2023-10-02T20:23:52.430033692Z" level=info msg="shim disconnected" id=512d60a316fb6f24eaa0ea30910b686b80f0d3dece553cbabd10fb4cbbdb1e02 Oct 2 20:23:52.430191 env[1156]: time="2023-10-02T20:23:52.430151719Z" level=warning msg="cleaning up after shim disconnected" id=512d60a316fb6f24eaa0ea30910b686b80f0d3dece553cbabd10fb4cbbdb1e02 namespace=k8s.io Oct 2 20:23:52.430191 env[1156]: time="2023-10-02T20:23:52.430182212Z" level=info msg="cleaning up dead shim" Oct 2 20:23:52.458558 env[1156]: time="2023-10-02T20:23:52.458398967Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:23:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2266 runtime=io.containerd.runc.v2\n" Oct 2 20:23:52.459206 env[1156]: time="2023-10-02T20:23:52.459096206Z" level=info msg="TearDown network for sandbox \"512d60a316fb6f24eaa0ea30910b686b80f0d3dece553cbabd10fb4cbbdb1e02\" successfully" Oct 2 20:23:52.459206 env[1156]: time="2023-10-02T20:23:52.459157088Z" level=info msg="StopPodSandbox for \"512d60a316fb6f24eaa0ea30910b686b80f0d3dece553cbabd10fb4cbbdb1e02\" returns successfully" Oct 2 20:23:52.644034 kubelet[1549]: I1002 20:23:52.643922 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-cni-path\") pod \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " Oct 2 20:23:52.644441 kubelet[1549]: I1002 20:23:52.644060 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-xtables-lock\") pod \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " Oct 2 20:23:52.644441 kubelet[1549]: I1002 20:23:52.644049 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-cni-path" (OuterVolumeSpecName: "cni-path") pod "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85" (UID: "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:23:52.644441 kubelet[1549]: I1002 20:23:52.644174 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-clustermesh-secrets\") pod \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " Oct 2 20:23:52.644441 kubelet[1549]: I1002 20:23:52.644285 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-host-proc-sys-net\") pod \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " Oct 2 20:23:52.644441 kubelet[1549]: I1002 20:23:52.644195 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85" (UID: "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:23:52.644441 kubelet[1549]: I1002 20:23:52.644385 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-host-proc-sys-kernel\") pod \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " Oct 2 20:23:52.645147 kubelet[1549]: I1002 20:23:52.644382 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85" (UID: "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:23:52.645147 kubelet[1549]: I1002 20:23:52.644505 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-hostproc\") pod \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " Oct 2 20:23:52.645147 kubelet[1549]: I1002 20:23:52.644492 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85" (UID: "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:23:52.645147 kubelet[1549]: I1002 20:23:52.644592 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-bpf-maps\") pod \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " Oct 2 20:23:52.645147 kubelet[1549]: I1002 20:23:52.644620 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-hostproc" (OuterVolumeSpecName: "hostproc") pod "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85" (UID: "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:23:52.645722 kubelet[1549]: I1002 20:23:52.644663 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85" (UID: "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:23:52.645722 kubelet[1549]: I1002 20:23:52.644713 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-cilium-config-path\") pod \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " Oct 2 20:23:52.645722 kubelet[1549]: I1002 20:23:52.644803 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-lib-modules\") pod \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " Oct 2 20:23:52.645722 kubelet[1549]: I1002 20:23:52.644900 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-cilium-run\") pod \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " Oct 2 20:23:52.645722 kubelet[1549]: I1002 20:23:52.644885 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85" (UID: "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:23:52.645722 kubelet[1549]: I1002 20:23:52.645002 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-etc-cni-netd\") pod \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " Oct 2 20:23:52.646338 kubelet[1549]: I1002 20:23:52.644995 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85" (UID: "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:23:52.646338 kubelet[1549]: I1002 20:23:52.645118 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8k66\" (UniqueName: \"kubernetes.io/projected/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-kube-api-access-v8k66\") pod \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " Oct 2 20:23:52.646338 kubelet[1549]: I1002 20:23:52.645109 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85" (UID: "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:23:52.646338 kubelet[1549]: I1002 20:23:52.645219 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-cilium-cgroup\") pod \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " Oct 2 20:23:52.646338 kubelet[1549]: W1002 20:23:52.645215 1549 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:23:52.646338 kubelet[1549]: I1002 20:23:52.645306 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85" (UID: "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:23:52.646959 kubelet[1549]: I1002 20:23:52.645341 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-hubble-tls\") pod \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\" (UID: \"f8faacef-d27b-4eae-ab0a-ae0ae4a80f85\") " Oct 2 20:23:52.646959 kubelet[1549]: I1002 20:23:52.645467 1549 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-etc-cni-netd\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:52.646959 kubelet[1549]: I1002 20:23:52.645529 1549 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-cilium-cgroup\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:52.646959 kubelet[1549]: I1002 20:23:52.645585 1549 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-host-proc-sys-net\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:52.646959 kubelet[1549]: I1002 20:23:52.645640 1549 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-cni-path\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:52.646959 kubelet[1549]: I1002 20:23:52.645698 1549 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-xtables-lock\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:52.646959 kubelet[1549]: I1002 20:23:52.645759 1549 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-lib-modules\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:52.646959 kubelet[1549]: I1002 20:23:52.645815 1549 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-cilium-run\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:52.647801 kubelet[1549]: I1002 20:23:52.645876 1549 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-host-proc-sys-kernel\") on node \"10.67.124.211\" DevicePath \"\"" Oct 
2 20:23:52.647801 kubelet[1549]: I1002 20:23:52.645932 1549 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-hostproc\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:52.647801 kubelet[1549]: I1002 20:23:52.645987 1549 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-bpf-maps\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:52.648529 kubelet[1549]: I1002 20:23:52.648448 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85" (UID: "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:23:52.648775 kubelet[1549]: I1002 20:23:52.648729 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85" (UID: "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:23:52.648813 kubelet[1549]: I1002 20:23:52.648792 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-kube-api-access-v8k66" (OuterVolumeSpecName: "kube-api-access-v8k66") pod "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85" (UID: "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85"). InnerVolumeSpecName "kube-api-access-v8k66". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:23:52.648923 kubelet[1549]: I1002 20:23:52.648883 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85" (UID: "f8faacef-d27b-4eae-ab0a-ae0ae4a80f85"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:23:52.746591 kubelet[1549]: I1002 20:23:52.746479 1549 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-clustermesh-secrets\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:52.746591 kubelet[1549]: I1002 20:23:52.746558 1549 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-cilium-config-path\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:52.746591 kubelet[1549]: I1002 20:23:52.746599 1549 reconciler.go:399] "Volume detached for volume \"kube-api-access-v8k66\" (UniqueName: \"kubernetes.io/projected/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-kube-api-access-v8k66\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:52.747135 kubelet[1549]: I1002 20:23:52.746634 1549 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85-hubble-tls\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:23:52.811343 kubelet[1549]: E1002 20:23:52.811235 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:52.898697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65554c22cbdbddb81d0549f4d18e57af32e2491aac865a6c7a4e5f9f0eb0a279-rootfs.mount: Deactivated successfully. Oct 2 20:23:52.898952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-512d60a316fb6f24eaa0ea30910b686b80f0d3dece553cbabd10fb4cbbdb1e02-rootfs.mount: Deactivated successfully. Oct 2 20:23:52.899134 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-512d60a316fb6f24eaa0ea30910b686b80f0d3dece553cbabd10fb4cbbdb1e02-shm.mount: Deactivated successfully. Oct 2 20:23:52.899310 systemd[1]: var-lib-kubelet-pods-f8faacef\x2dd27b\x2d4eae\x2dab0a\x2dae0ae4a80f85-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv8k66.mount: Deactivated successfully. Oct 2 20:23:52.899511 systemd[1]: var-lib-kubelet-pods-f8faacef\x2dd27b\x2d4eae\x2dab0a\x2dae0ae4a80f85-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 20:23:52.899688 systemd[1]: var-lib-kubelet-pods-f8faacef\x2dd27b\x2d4eae\x2dab0a\x2dae0ae4a80f85-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 20:23:53.370437 kubelet[1549]: I1002 20:23:53.370370 1549 scope.go:115] "RemoveContainer" containerID="65554c22cbdbddb81d0549f4d18e57af32e2491aac865a6c7a4e5f9f0eb0a279" Oct 2 20:23:53.371151 env[1156]: time="2023-10-02T20:23:53.371117069Z" level=info msg="RemoveContainer for \"65554c22cbdbddb81d0549f4d18e57af32e2491aac865a6c7a4e5f9f0eb0a279\"" Oct 2 20:23:53.372311 env[1156]: time="2023-10-02T20:23:53.372270954Z" level=info msg="RemoveContainer for \"65554c22cbdbddb81d0549f4d18e57af32e2491aac865a6c7a4e5f9f0eb0a279\" returns successfully" Oct 2 20:23:53.372649 systemd[1]: Removed slice kubepods-burstable-podf8faacef_d27b_4eae_ab0a_ae0ae4a80f85.slice. 
Oct 2 20:23:53.812349 kubelet[1549]: E1002 20:23:53.812133 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:53.878560 kubelet[1549]: I1002 20:23:53.878496 1549 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=f8faacef-d27b-4eae-ab0a-ae0ae4a80f85 path="/var/lib/kubelet/pods/f8faacef-d27b-4eae-ab0a-ae0ae4a80f85/volumes" Oct 2 20:23:54.756459 kubelet[1549]: I1002 20:23:54.756359 1549 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:23:54.756866 kubelet[1549]: E1002 20:23:54.756498 1549 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="f8faacef-d27b-4eae-ab0a-ae0ae4a80f85" containerName="mount-cgroup" Oct 2 20:23:54.756866 kubelet[1549]: I1002 20:23:54.756568 1549 memory_manager.go:345] "RemoveStaleState removing state" podUID="f8faacef-d27b-4eae-ab0a-ae0ae4a80f85" containerName="mount-cgroup" Oct 2 20:23:54.757260 kubelet[1549]: I1002 20:23:54.757122 1549 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:23:54.771346 kubelet[1549]: E1002 20:23:54.771290 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:23:54.772352 systemd[1]: Created slice kubepods-besteffort-podf12c4527_d3ac_4425_a515_0f53d39daccf.slice. Oct 2 20:23:54.782352 systemd[1]: Created slice kubepods-burstable-poda9c4a188_e695_4d81_baf6_9b5d853a5d88.slice. Oct 2 20:23:54.813378 kubelet[1549]: E1002 20:23:54.813271 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:54.859700 kubelet[1549]: I1002 20:23:54.859593 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-cilium-cgroup\") pod \"cilium-8tk78\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " pod="kube-system/cilium-8tk78" Oct 2 20:23:54.859700 kubelet[1549]: I1002 20:23:54.859707 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-etc-cni-netd\") pod \"cilium-8tk78\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " pod="kube-system/cilium-8tk78" Oct 2 20:23:54.860139 kubelet[1549]: I1002 20:23:54.859781 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a9c4a188-e695-4d81-baf6-9b5d853a5d88-clustermesh-secrets\") pod \"cilium-8tk78\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " pod="kube-system/cilium-8tk78" Oct 2 20:23:54.860139 kubelet[1549]: I1002 20:23:54.859936 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9c4a188-e695-4d81-baf6-9b5d853a5d88-cilium-config-path\") pod \"cilium-8tk78\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " pod="kube-system/cilium-8tk78" Oct 2 20:23:54.860139 kubelet[1549]: I1002 20:23:54.860006 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9c4a188-e695-4d81-baf6-9b5d853a5d88-hubble-tls\") pod \"cilium-8tk78\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " pod="kube-system/cilium-8tk78" Oct 2 
20:23:54.860478 kubelet[1549]: I1002 20:23:54.860196 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdr2p\" (UniqueName: \"kubernetes.io/projected/f12c4527-d3ac-4425-a515-0f53d39daccf-kube-api-access-jdr2p\") pod \"cilium-operator-69b677f97c-jsrpl\" (UID: \"f12c4527-d3ac-4425-a515-0f53d39daccf\") " pod="kube-system/cilium-operator-69b677f97c-jsrpl" Oct 2 20:23:54.860478 kubelet[1549]: I1002 20:23:54.860345 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a9c4a188-e695-4d81-baf6-9b5d853a5d88-cilium-ipsec-secrets\") pod \"cilium-8tk78\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " pod="kube-system/cilium-8tk78" Oct 2 20:23:54.860736 kubelet[1549]: I1002 20:23:54.860484 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb9n5\" (UniqueName: \"kubernetes.io/projected/a9c4a188-e695-4d81-baf6-9b5d853a5d88-kube-api-access-bb9n5\") pod \"cilium-8tk78\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " pod="kube-system/cilium-8tk78" Oct 2 20:23:54.860736 kubelet[1549]: I1002 20:23:54.860660 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-cni-path\") pod \"cilium-8tk78\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " pod="kube-system/cilium-8tk78" Oct 2 20:23:54.860940 kubelet[1549]: I1002 20:23:54.860773 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-lib-modules\") pod \"cilium-8tk78\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " pod="kube-system/cilium-8tk78" Oct 2 20:23:54.860940 kubelet[1549]: I1002 20:23:54.860836 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-xtables-lock\") pod \"cilium-8tk78\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " pod="kube-system/cilium-8tk78" Oct 2 20:23:54.860940 kubelet[1549]: I1002 20:23:54.860907 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-host-proc-sys-kernel\") pod \"cilium-8tk78\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " pod="kube-system/cilium-8tk78" Oct 2 20:23:54.861256 kubelet[1549]: I1002 20:23:54.861009 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f12c4527-d3ac-4425-a515-0f53d39daccf-cilium-config-path\") pod \"cilium-operator-69b677f97c-jsrpl\" (UID: \"f12c4527-d3ac-4425-a515-0f53d39daccf\") " pod="kube-system/cilium-operator-69b677f97c-jsrpl" Oct 2 20:23:54.861256 kubelet[1549]: I1002 20:23:54.861090 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-cilium-run\") pod \"cilium-8tk78\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " pod="kube-system/cilium-8tk78" Oct 2 20:23:54.861256 kubelet[1549]: I1002 20:23:54.861152 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-bpf-maps\") pod \"cilium-8tk78\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " pod="kube-system/cilium-8tk78" Oct 2 20:23:54.861588 kubelet[1549]: I1002 20:23:54.861252 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-hostproc\") pod \"cilium-8tk78\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " pod="kube-system/cilium-8tk78" Oct 2 20:23:54.861588 kubelet[1549]: I1002 20:23:54.861338 1549 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-host-proc-sys-net\") pod \"cilium-8tk78\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " pod="kube-system/cilium-8tk78" Oct 2 20:23:55.080173 env[1156]: time="2023-10-02T20:23:55.080054223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-jsrpl,Uid:f12c4527-d3ac-4425-a515-0f53d39daccf,Namespace:kube-system,Attempt:0,}" Oct 2 20:23:55.099599 env[1156]: time="2023-10-02T20:23:55.099474629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8tk78,Uid:a9c4a188-e695-4d81-baf6-9b5d853a5d88,Namespace:kube-system,Attempt:0,}" Oct 2 20:23:55.113080 env[1156]: time="2023-10-02T20:23:55.113029454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:23:55.113080 env[1156]: time="2023-10-02T20:23:55.113066159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:23:55.113080 env[1156]: time="2023-10-02T20:23:55.113073118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:23:55.113211 env[1156]: time="2023-10-02T20:23:55.113156688Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a66b6cc108d118870cf2950a8bf4e7542dbb1a4714e0545efd80565f85e05c2a pid=2297 runtime=io.containerd.runc.v2 Oct 2 20:23:55.113581 env[1156]: time="2023-10-02T20:23:55.113511703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:23:55.113581 env[1156]: time="2023-10-02T20:23:55.113543761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:23:55.113581 env[1156]: time="2023-10-02T20:23:55.113550690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:23:55.113697 env[1156]: time="2023-10-02T20:23:55.113641030Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a60f49689442b300bd429e77f7c13fa0ae65ba324b3e78eec35406277560a49f pid=2305 runtime=io.containerd.runc.v2 Oct 2 20:23:55.119156 systemd[1]: Started cri-containerd-a66b6cc108d118870cf2950a8bf4e7542dbb1a4714e0545efd80565f85e05c2a.scope. 
Oct 2 20:23:55.124000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit: BPF prog-id=74 op=LOAD Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { bpf } for pid=2318 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001b1c48 a2=10 a3=1c items=0 ppid=2297 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:23:55.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136366236636331303864313138383730636632393530613862663465 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { perfmon } for pid=2318 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001b16b0 a2=3c a3=c items=0 ppid=2297 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:23:55.124000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136366236636331303864313138383730636632393530613862663465 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { bpf } for pid=2318 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { bpf } for pid=2318 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { bpf } for pid=2318 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { perfmon } for pid=2318 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { perfmon } for pid=2318 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { perfmon } for pid=2318 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { perfmon } for pid=2318 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { perfmon } for pid=2318 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { bpf } for pid=2318 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { bpf } for pid=2318 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit: BPF prog-id=75 op=LOAD Oct 2 20:23:55.124000 audit[2318]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001b19d8 a2=78 a3=c0002a4c90 items=0 ppid=2297 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:23:55.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136366236636331303864313138383730636632393530613862663465 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { bpf } for pid=2318 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { bpf } for pid=2318 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: 
denied { perfmon } for pid=2318 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { perfmon } for pid=2318 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { perfmon } for pid=2318 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { perfmon } for pid=2318 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { perfmon } for pid=2318 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { bpf } for pid=2318 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { bpf } for pid=2318 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit: BPF prog-id=76 op=LOAD Oct 2 20:23:55.124000 audit[2318]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001b1770 a2=78 a3=c0002a4cd8 items=0 ppid=2297 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:23:55.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136366236636331303864313138383730636632393530613862663465 Oct 2 20:23:55.124000 audit: BPF prog-id=76 op=UNLOAD Oct 2 20:23:55.124000 audit: BPF prog-id=75 op=UNLOAD Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { bpf } for pid=2318 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { bpf } for pid=2318 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { bpf } for pid=2318 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { perfmon } for pid=2318 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { perfmon } for pid=2318 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { perfmon } for pid=2318 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: 
denied { perfmon } for pid=2318 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { perfmon } for pid=2318 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { bpf } for pid=2318 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit[2318]: AVC avc: denied { bpf } for pid=2318 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.124000 audit: BPF prog-id=77 op=LOAD Oct 2 20:23:55.124000 audit[2318]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001b1c30 a2=78 a3=c0002a50e8 items=0 ppid=2297 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:23:55.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136366236636331303864313138383730636632393530613862663465 Oct 2 20:23:55.130040 systemd[1]: Started cri-containerd-a60f49689442b300bd429e77f7c13fa0ae65ba324b3e78eec35406277560a49f.scope. Oct 2 20:23:55.134000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.134000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.134000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.134000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.134000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.134000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.134000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.134000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.134000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit: BPF prog-id=78 op=LOAD Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { bpf } for pid=2320 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2305 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:23:55.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136306634393638393434326233303062643432396537376637633133 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { perfmon } for pid=2320 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2305 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:23:55.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136306634393638393434326233303062643432396537376637633133 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { bpf } for pid=2320 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { bpf } for pid=2320 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { bpf } for pid=2320 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { perfmon } for pid=2320 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { perfmon } for pid=2320 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { perfmon } for pid=2320 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { perfmon } for pid=2320 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { perfmon } for pid=2320 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { bpf } for pid=2320 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { bpf } for pid=2320 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit: BPF prog-id=79 op=LOAD Oct 2 20:23:55.135000 audit[2320]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0000a0e20 items=0 ppid=2305 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:23:55.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136306634393638393434326233303062643432396537376637633133 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { bpf } for pid=2320 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { bpf } for pid=2320 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { perfmon } for pid=2320 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { perfmon } for pid=2320 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { perfmon } for pid=2320 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { perfmon } for pid=2320 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { perfmon } for pid=2320 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { bpf } for pid=2320 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { bpf } for pid=2320 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit: BPF prog-id=80 op=LOAD Oct 2 20:23:55.135000 audit[2320]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0000a0e68 items=0 ppid=2305 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:23:55.135000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136306634393638393434326233303062643432396537376637633133 Oct 2 20:23:55.135000 audit: BPF prog-id=80 op=UNLOAD Oct 2 20:23:55.135000 audit: BPF prog-id=79 op=UNLOAD Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { bpf } for pid=2320 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { bpf } for pid=2320 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { bpf } for pid=2320 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { perfmon } for pid=2320 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { perfmon } for pid=2320 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { perfmon } for pid=2320 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { perfmon } for pid=2320 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { perfmon } for pid=2320 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { bpf } for pid=2320 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit[2320]: AVC avc: denied { bpf } for pid=2320 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:55.135000 audit: BPF prog-id=81 op=LOAD Oct 2 20:23:55.135000 audit[2320]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0000a1278 items=0 ppid=2305 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:23:55.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136306634393638393434326233303062643432396537376637633133 Oct 2 20:23:55.141307 env[1156]: time="2023-10-02T20:23:55.141248915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8tk78,Uid:a9c4a188-e695-4d81-baf6-9b5d853a5d88,Namespace:kube-system,Attempt:0,} returns sandbox id \"a60f49689442b300bd429e77f7c13fa0ae65ba324b3e78eec35406277560a49f\"" Oct 2 20:23:55.142451 env[1156]: 
time="2023-10-02T20:23:55.142432632Z" level=info msg="CreateContainer within sandbox \"a60f49689442b300bd429e77f7c13fa0ae65ba324b3e78eec35406277560a49f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 20:23:55.146670 env[1156]: time="2023-10-02T20:23:55.146655154Z" level=info msg="CreateContainer within sandbox \"a60f49689442b300bd429e77f7c13fa0ae65ba324b3e78eec35406277560a49f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81\"" Oct 2 20:23:55.146816 env[1156]: time="2023-10-02T20:23:55.146804322Z" level=info msg="StartContainer for \"ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81\"" Oct 2 20:23:55.153606 env[1156]: time="2023-10-02T20:23:55.153583949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-jsrpl,Uid:f12c4527-d3ac-4425-a515-0f53d39daccf,Namespace:kube-system,Attempt:0,} returns sandbox id \"a66b6cc108d118870cf2950a8bf4e7542dbb1a4714e0545efd80565f85e05c2a\"" Oct 2 20:23:55.154242 env[1156]: time="2023-10-02T20:23:55.154200803Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\"" Oct 2 20:23:55.165484 systemd[1]: Started cri-containerd-ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81.scope. Oct 2 20:23:55.170808 systemd[1]: cri-containerd-ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81.scope: Deactivated successfully. Oct 2 20:23:55.170960 systemd[1]: Stopped cri-containerd-ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81.scope. Oct 2 20:23:55.179544 env[1156]: time="2023-10-02T20:23:55.179485286Z" level=info msg="shim disconnected" id=ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81 Oct 2 20:23:55.179663 env[1156]: time="2023-10-02T20:23:55.179548274Z" level=warning msg="cleaning up after shim disconnected" id=ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81 namespace=k8s.io Oct 2 20:23:55.179663 env[1156]: time="2023-10-02T20:23:55.179560755Z" level=info msg="cleaning up dead shim" Oct 2 20:23:55.184255 env[1156]: time="2023-10-02T20:23:55.184227408Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:23:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2389 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:23:55Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:23:55.184472 env[1156]: time="2023-10-02T20:23:55.184389178Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" Oct 2 20:23:55.184591 env[1156]: time="2023-10-02T20:23:55.184514135Z" level=error msg="Failed to pipe stdout of container \"ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81\"" error="reading from a closed fifo" Oct 2 20:23:55.184591 env[1156]: time="2023-10-02T20:23:55.184532472Z" level=error msg="Failed to pipe stderr of container \"ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81\"" error="reading from a closed fifo" Oct 2 20:23:55.185231 env[1156]: time="2023-10-02T20:23:55.185208865Z" level=error msg="StartContainer for \"ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81\" failed" error="failed to create containerd task: failed to create shim task: OCI 
runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:23:55.185397 kubelet[1549]: E1002 20:23:55.185384 1549 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81" Oct 2 20:23:55.185477 kubelet[1549]: E1002 20:23:55.185470 1549 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:23:55.185477 kubelet[1549]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:23:55.185477 kubelet[1549]: rm /hostbin/cilium-mount Oct 2 20:23:55.185477 kubelet[1549]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-bb9n5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-8tk78_kube-system(a9c4a188-e695-4d81-baf6-9b5d853a5d88): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:23:55.185640 kubelet[1549]: E1002 20:23:55.185498 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-8tk78" podUID=a9c4a188-e695-4d81-baf6-9b5d853a5d88 Oct 2 20:23:55.387390 env[1156]: time="2023-10-02T20:23:55.387183403Z" level=info msg="CreateContainer within sandbox \"a60f49689442b300bd429e77f7c13fa0ae65ba324b3e78eec35406277560a49f\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 20:23:55.396418 env[1156]: time="2023-10-02T20:23:55.396373502Z" level=info msg="CreateContainer within sandbox \"a60f49689442b300bd429e77f7c13fa0ae65ba324b3e78eec35406277560a49f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37\"" Oct 2 20:23:55.396628 env[1156]: time="2023-10-02T20:23:55.396574446Z" level=info msg="StartContainer for \"9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37\"" Oct 2 20:23:55.409495 kubelet[1549]: W1002 20:23:55.409419 1549 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8faacef_d27b_4eae_ab0a_ae0ae4a80f85.slice/cri-containerd-65554c22cbdbddb81d0549f4d18e57af32e2491aac865a6c7a4e5f9f0eb0a279.scope WatchSource:0}: container "65554c22cbdbddb81d0549f4d18e57af32e2491aac865a6c7a4e5f9f0eb0a279" in namespace "k8s.io": not found Oct 2 20:23:55.427518 systemd[1]: Started cri-containerd-9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37.scope. Oct 2 20:23:55.449084 systemd[1]: cri-containerd-9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37.scope: Deactivated successfully. Oct 2 20:23:55.471248 env[1156]: time="2023-10-02T20:23:55.471109570Z" level=info msg="shim disconnected" id=9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37 Oct 2 20:23:55.471248 env[1156]: time="2023-10-02T20:23:55.471228420Z" level=warning msg="cleaning up after shim disconnected" id=9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37 namespace=k8s.io Oct 2 20:23:55.471785 env[1156]: time="2023-10-02T20:23:55.471259052Z" level=info msg="cleaning up dead shim" Oct 2 20:23:55.499651 env[1156]: time="2023-10-02T20:23:55.499498684Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:23:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2427 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:23:55Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:23:55.500249 env[1156]: time="2023-10-02T20:23:55.500065054Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" Oct 2 20:23:55.500701 env[1156]: time="2023-10-02T20:23:55.500550718Z" level=error msg="Failed to pipe stdout of container \"9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37\"" error="reading from a closed fifo" Oct 2 20:23:55.500914 env[1156]: time="2023-10-02T20:23:55.500647145Z" level=error msg="Failed to pipe stderr of container \"9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37\"" error="reading from a closed fifo" Oct 2 20:23:55.502445 env[1156]: time="2023-10-02T20:23:55.502277762Z" level=error msg="StartContainer for \"9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:23:55.502839 kubelet[1549]: E1002 20:23:55.502733 1549 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim 
task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37" Oct 2 20:23:55.503110 kubelet[1549]: E1002 20:23:55.502989 1549 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:23:55.503110 kubelet[1549]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:23:55.503110 kubelet[1549]: rm /hostbin/cilium-mount Oct 2 20:23:55.503110 kubelet[1549]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-bb9n5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-8tk78_kube-system(a9c4a188-e695-4d81-baf6-9b5d853a5d88): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:23:55.503693 kubelet[1549]: E1002 20:23:55.503087 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-8tk78" podUID=a9c4a188-e695-4d81-baf6-9b5d853a5d88 Oct 2 20:23:55.814544 kubelet[1549]: E1002 20:23:55.814282 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:56.384051 kubelet[1549]: I1002 20:23:56.384001 1549 scope.go:115] "RemoveContainer" containerID="ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81" Oct 2 20:23:56.386340 kubelet[1549]: I1002 20:23:56.384164 1549 scope.go:115] "RemoveContainer" containerID="ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81" Oct 2 20:23:56.387156 env[1156]: time="2023-10-02T20:23:56.387095380Z" level=info msg="RemoveContainer 
for \"ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81\"" Oct 2 20:23:56.387386 env[1156]: time="2023-10-02T20:23:56.387290783Z" level=info msg="RemoveContainer for \"ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81\"" Oct 2 20:23:56.387386 env[1156]: time="2023-10-02T20:23:56.387335698Z" level=error msg="RemoveContainer for \"ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81\" failed" error="failed to set removing state for container \"ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81\": container is already in removing state" Oct 2 20:23:56.387467 kubelet[1549]: E1002 20:23:56.387407 1549 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81\": container is already in removing state" containerID="ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81" Oct 2 20:23:56.387467 kubelet[1549]: E1002 20:23:56.387425 1549 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81": container is already in removing state; Skipping pod "cilium-8tk78_kube-system(a9c4a188-e695-4d81-baf6-9b5d853a5d88)" Oct 2 20:23:56.387574 kubelet[1549]: E1002 20:23:56.387566 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-8tk78_kube-system(a9c4a188-e695-4d81-baf6-9b5d853a5d88)\"" pod="kube-system/cilium-8tk78" podUID=a9c4a188-e695-4d81-baf6-9b5d853a5d88 Oct 2 20:23:56.388969 env[1156]: time="2023-10-02T20:23:56.388950583Z" level=info msg="RemoveContainer for \"ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81\" returns successfully" Oct 2 20:23:56.392304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2389435199.mount: Deactivated successfully. 
Oct 2 20:23:56.815439 kubelet[1549]: E1002 20:23:56.815341 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:56.826512 env[1156]: time="2023-10-02T20:23:56.826461224Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:23:56.827127 env[1156]: time="2023-10-02T20:23:56.827062229Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:23:56.827910 env[1156]: time="2023-10-02T20:23:56.827870590Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:23:56.828244 env[1156]: time="2023-10-02T20:23:56.828200921Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\" returns image reference \"sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e\"" Oct 2 20:23:56.829194 env[1156]: time="2023-10-02T20:23:56.829148938Z" level=info msg="CreateContainer within sandbox \"a66b6cc108d118870cf2950a8bf4e7542dbb1a4714e0545efd80565f85e05c2a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 20:23:56.834262 env[1156]: time="2023-10-02T20:23:56.834226851Z" level=info msg="CreateContainer within sandbox \"a66b6cc108d118870cf2950a8bf4e7542dbb1a4714e0545efd80565f85e05c2a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030\"" Oct 2 20:23:56.834699 env[1156]: time="2023-10-02T20:23:56.834620863Z" level=info msg="StartContainer for \"dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030\"" Oct 2 20:23:56.853790 systemd[1]: Started cri-containerd-dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030.scope. 
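Annotation: the audit records in this log use two compact encodings. PROCTITLE is the process argv, hex-encoded with NUL separators, and the capability2 AVC records give numeric capability indices: on this 5.15 kernel, capability=38 is CAP_PERFMON and capability=39 is CAP_BPF. A small decoding sketch, assuming only Python 3 and its standard library; the sample string is a prefix of one PROCTITLE value from the records above.

```python
# Helpers for reading the audit records in this log (standard library only).
def decode_proctitle(hex_argv: str) -> str:
    """Turn an audit PROCTITLE hex blob into a readable command line."""
    return bytes.fromhex(hex_argv).replace(b"\x00", b" ").decode(errors="replace")


# Subset of capability numbers seen in the capability2 AVC records above.
CAPABILITY_NAMES = {38: "CAP_PERFMON", 39: "CAP_BPF"}

if __name__ == "__main__":
    sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
    print(decode_proctitle(sample))  # -> "runc --root /run/containerd/runc/k8s.io"
    print(CAPABILITY_NAMES[39])      # -> "CAP_BPF"
```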
Oct 2 20:23:56.860000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.885111 kernel: kauditd_printk_skb: 165 callbacks suppressed Oct 2 20:23:56.885196 kernel: audit: type=1400 audit(1696278236.860:677): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.860000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.002990 kernel: audit: type=1400 audit(1696278236.860:678): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.003036 kernel: audit: type=1400 audit(1696278236.860:679): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.860000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.062810 kernel: audit: type=1400 audit(1696278236.860:680): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.860000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.122242 kernel: audit: type=1400 audit(1696278236.860:681): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.860000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.181036 kernel: audit: type=1400 audit(1696278236.860:682): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.860000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.239884 kernel: audit: type=1400 audit(1696278236.860:683): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.860000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.298644 kernel: audit: type=1400 audit(1696278236.860:684): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.860000 audit[1]: AVC avc: denied { perfmon } for 
pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.357456 kernel: audit: type=1400 audit(1696278236.860:685): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.860000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.386828 kubelet[1549]: E1002 20:23:57.386803 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-8tk78_kube-system(a9c4a188-e695-4d81-baf6-9b5d853a5d88)\"" pod="kube-system/cilium-8tk78" podUID=a9c4a188-e695-4d81-baf6-9b5d853a5d88 Oct 2 20:23:57.416140 kernel: audit: type=1400 audit(1696278236.942:686): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.942000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.942000 audit: BPF prog-id=82 op=LOAD Oct 2 20:23:56.943000 audit[2446]: AVC avc: denied { bpf } for pid=2446 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.943000 audit[2446]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2297 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:23:56.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463336139653133616161663533376535623330316338313431346130 Oct 2 20:23:56.943000 audit[2446]: AVC avc: denied { perfmon } for pid=2446 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.943000 audit[2446]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=2297 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:23:56.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463336139653133616161663533376535623330316338313431346130 Oct 2 20:23:56.943000 audit[2446]: AVC avc: denied { bpf } for pid=2446 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.943000 audit[2446]: AVC avc: denied { bpf } for pid=2446 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.943000 audit[2446]: AVC avc: denied { bpf } for pid=2446 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.943000 audit[2446]: AVC avc: denied { perfmon } for pid=2446 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.943000 audit[2446]: AVC avc: denied { perfmon } for pid=2446 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.943000 audit[2446]: AVC avc: denied { perfmon } for pid=2446 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.943000 audit[2446]: AVC avc: denied { perfmon } for pid=2446 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.943000 audit[2446]: AVC avc: denied { perfmon } for pid=2446 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.943000 audit[2446]: AVC avc: denied { bpf } for pid=2446 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.943000 audit[2446]: AVC avc: denied { bpf } for pid=2446 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:56.943000 audit: BPF prog-id=83 op=LOAD Oct 2 20:23:56.943000 audit[2446]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000333050 items=0 ppid=2297 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:23:56.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463336139653133616161663533376535623330316338313431346130 Oct 2 20:23:57.062000 audit[2446]: AVC avc: denied { bpf } for pid=2446 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.062000 audit[2446]: AVC avc: denied { bpf } for pid=2446 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.062000 audit[2446]: AVC avc: denied { perfmon } for pid=2446 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.062000 audit[2446]: AVC avc: denied { perfmon } for pid=2446 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.062000 audit[2446]: AVC avc: denied { perfmon } for pid=2446 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.062000 audit[2446]: AVC 
avc: denied { perfmon } for pid=2446 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.062000 audit[2446]: AVC avc: denied { perfmon } for pid=2446 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.062000 audit[2446]: AVC avc: denied { bpf } for pid=2446 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.062000 audit[2446]: AVC avc: denied { bpf } for pid=2446 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.062000 audit: BPF prog-id=84 op=LOAD Oct 2 20:23:57.062000 audit[2446]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000333098 items=0 ppid=2297 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:23:57.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463336139653133616161663533376535623330316338313431346130 Oct 2 20:23:57.180000 audit: BPF prog-id=84 op=UNLOAD Oct 2 20:23:57.180000 audit: BPF prog-id=83 op=UNLOAD Oct 2 20:23:57.180000 audit[2446]: AVC avc: denied { bpf } for pid=2446 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.180000 audit[2446]: AVC avc: denied { bpf } for pid=2446 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.180000 audit[2446]: AVC avc: denied { bpf } for pid=2446 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.180000 audit[2446]: AVC avc: denied { perfmon } for pid=2446 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.180000 audit[2446]: AVC avc: denied { perfmon } for pid=2446 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.180000 audit[2446]: AVC avc: denied { perfmon } for pid=2446 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.180000 audit[2446]: AVC avc: denied { perfmon } for pid=2446 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.180000 audit[2446]: AVC avc: denied { perfmon } for pid=2446 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.180000 audit[2446]: AVC avc: denied { bpf } for pid=2446 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.180000 audit[2446]: AVC avc: 
denied { bpf } for pid=2446 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:23:57.180000 audit: BPF prog-id=85 op=LOAD Oct 2 20:23:57.180000 audit[2446]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0003334a8 items=0 ppid=2297 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:23:57.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463336139653133616161663533376535623330316338313431346130 Oct 2 20:23:57.492104 env[1156]: time="2023-10-02T20:23:57.492049258Z" level=info msg="StartContainer for \"dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030\" returns successfully" Oct 2 20:23:57.502000 audit[2456]: AVC avc: denied { map_create } for pid=2456 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c195,c894 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c195,c894 tclass=bpf permissive=0 Oct 2 20:23:57.502000 audit[2456]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c0003437d0 a2=48 a3=c0003437c0 items=0 ppid=2297 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c195,c894 key=(null) Oct 2 20:23:57.502000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 20:23:57.816761 kubelet[1549]: E1002 20:23:57.816572 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:58.582319 kubelet[1549]: W1002 20:23:58.582230 1549 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9c4a188_e695_4d81_baf6_9b5d853a5d88.slice/cri-containerd-ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81.scope WatchSource:0}: container "ba490165c0089943139886f7e419f56af07267853adb31c95b2557963f75ec81" in namespace "k8s.io": not found Oct 2 20:23:58.817446 kubelet[1549]: E1002 20:23:58.817370 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:59.674753 kubelet[1549]: E1002 20:23:59.674651 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:23:59.772221 kubelet[1549]: E1002 20:23:59.772150 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:23:59.817796 kubelet[1549]: E1002 20:23:59.817716 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:00.818212 kubelet[1549]: E1002 20:24:00.818131 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:01.692594 kubelet[1549]: W1002 20:24:01.692514 1549 manager.go:1174] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9c4a188_e695_4d81_baf6_9b5d853a5d88.slice/cri-containerd-9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37.scope WatchSource:0}: task 9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37 not found: not found Oct 2 20:24:01.819461 kubelet[1549]: E1002 20:24:01.819325 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:02.820207 kubelet[1549]: E1002 20:24:02.820106 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:03.821084 kubelet[1549]: E1002 20:24:03.820964 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:04.774427 kubelet[1549]: E1002 20:24:04.774311 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:24:04.821927 kubelet[1549]: E1002 20:24:04.821826 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:05.822824 kubelet[1549]: E1002 20:24:05.822718 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:06.823786 kubelet[1549]: E1002 20:24:06.823682 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:07.824831 kubelet[1549]: E1002 20:24:07.824728 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:08.825879 kubelet[1549]: E1002 20:24:08.825780 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:09.775656 kubelet[1549]: E1002 20:24:09.775558 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:24:09.826775 kubelet[1549]: E1002 20:24:09.826672 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:09.878169 env[1156]: time="2023-10-02T20:24:09.878035908Z" level=info msg="CreateContainer within sandbox \"a60f49689442b300bd429e77f7c13fa0ae65ba324b3e78eec35406277560a49f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 20:24:09.895675 env[1156]: time="2023-10-02T20:24:09.895545728Z" level=info msg="CreateContainer within sandbox \"a60f49689442b300bd429e77f7c13fa0ae65ba324b3e78eec35406277560a49f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d\"" Oct 2 20:24:09.896327 env[1156]: time="2023-10-02T20:24:09.896274325Z" level=info msg="StartContainer for \"fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d\"" Oct 2 20:24:09.930365 systemd[1]: Started cri-containerd-fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d.scope. Oct 2 20:24:09.949591 systemd[1]: cri-containerd-fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d.scope: Deactivated successfully. 
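Annotation: in the SYSCALL records above, arch=c000003e is x86_64 and syscall=321 is bpf(2); the cilium-operator entry is denied { map_create } under its svirt_lxc_net_t context and returns exit=-13 (EACCES). The sketch below, assuming x86_64 Linux and illustrative only (cilium-operator uses its own Go BPF library, not this code), shows the BPF_MAP_CREATE call that such a denial blocks.

```python
# Minimal BPF_MAP_CREATE via the raw bpf(2) syscall, to show what the
# "map_create" AVC above refers to. Illustrative sketch, not Cilium code.
import ctypes
import struct

SYS_bpf = 321           # x86_64 syscall number; matches syscall=321 in the log
BPF_MAP_CREATE = 0
BPF_MAP_TYPE_ARRAY = 2


def try_map_create() -> int:
    libc = ctypes.CDLL(None, use_errno=True)
    # First five u32 fields of union bpf_attr for BPF_MAP_CREATE:
    # map_type, key_size, value_size, max_entries, map_flags
    attr = struct.pack("IIIII", BPF_MAP_TYPE_ARRAY, 4, 4, 1, 0)
    buf = ctypes.create_string_buffer(attr, len(attr))
    fd = libc.syscall(SYS_bpf, BPF_MAP_CREATE, buf, len(attr))
    if fd < 0:
        # Under the SELinux denial logged above this comes back as EACCES,
        # i.e. the exit=-13 seen in the corresponding SYSCALL record.
        return -ctypes.get_errno()
    return fd


if __name__ == "__main__":
    print(try_map_create())
```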
Oct 2 20:24:09.959231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d-rootfs.mount: Deactivated successfully. Oct 2 20:24:10.118355 env[1156]: time="2023-10-02T20:24:10.118204234Z" level=info msg="shim disconnected" id=fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d Oct 2 20:24:10.118355 env[1156]: time="2023-10-02T20:24:10.118319251Z" level=warning msg="cleaning up after shim disconnected" id=fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d namespace=k8s.io Oct 2 20:24:10.118355 env[1156]: time="2023-10-02T20:24:10.118347819Z" level=info msg="cleaning up dead shim" Oct 2 20:24:10.147792 env[1156]: time="2023-10-02T20:24:10.147620270Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:24:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2514 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:24:10Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:24:10.148393 env[1156]: time="2023-10-02T20:24:10.148224968Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 20:24:10.148905 env[1156]: time="2023-10-02T20:24:10.148750481Z" level=error msg="Failed to pipe stdout of container \"fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d\"" error="reading from a closed fifo" Oct 2 20:24:10.148905 env[1156]: time="2023-10-02T20:24:10.148780376Z" level=error msg="Failed to pipe stderr of container \"fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d\"" error="reading from a closed fifo" Oct 2 20:24:10.150241 env[1156]: time="2023-10-02T20:24:10.150124331Z" level=error msg="StartContainer for \"fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:24:10.150680 kubelet[1549]: E1002 20:24:10.150613 1549 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d" Oct 2 20:24:10.150987 kubelet[1549]: E1002 20:24:10.150832 1549 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:24:10.150987 kubelet[1549]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:24:10.150987 kubelet[1549]: rm /hostbin/cilium-mount Oct 2 20:24:10.150987 kubelet[1549]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-bb9n5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-8tk78_kube-system(a9c4a188-e695-4d81-baf6-9b5d853a5d88): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:24:10.151833 kubelet[1549]: E1002 20:24:10.150927 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-8tk78" podUID=a9c4a188-e695-4d81-baf6-9b5d853a5d88 Oct 2 20:24:10.426579 kubelet[1549]: I1002 20:24:10.426389 1549 scope.go:115] "RemoveContainer" containerID="9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37" Oct 2 20:24:10.427283 kubelet[1549]: I1002 20:24:10.427227 1549 scope.go:115] "RemoveContainer" containerID="9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37" Oct 2 20:24:10.429146 env[1156]: time="2023-10-02T20:24:10.429016106Z" level=info msg="RemoveContainer for \"9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37\"" Oct 2 20:24:10.429961 env[1156]: time="2023-10-02T20:24:10.429840366Z" level=info msg="RemoveContainer for \"9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37\"" Oct 2 20:24:10.430281 env[1156]: time="2023-10-02T20:24:10.430152799Z" level=error msg="RemoveContainer for \"9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37\" failed" error="failed to set removing state for container \"9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37\": container is already in removing state" Oct 2 20:24:10.430639 kubelet[1549]: E1002 20:24:10.430566 1549 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37\": container is already in removing state" 
containerID="9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37" Oct 2 20:24:10.430639 kubelet[1549]: E1002 20:24:10.430632 1549 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37": container is already in removing state; Skipping pod "cilium-8tk78_kube-system(a9c4a188-e695-4d81-baf6-9b5d853a5d88)" Oct 2 20:24:10.431485 kubelet[1549]: E1002 20:24:10.431385 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-8tk78_kube-system(a9c4a188-e695-4d81-baf6-9b5d853a5d88)\"" pod="kube-system/cilium-8tk78" podUID=a9c4a188-e695-4d81-baf6-9b5d853a5d88 Oct 2 20:24:10.433739 env[1156]: time="2023-10-02T20:24:10.433636519Z" level=info msg="RemoveContainer for \"9d566115730a7aa492aea4022d754567bd6b5b9b67706ed8e98202666d3baf37\" returns successfully" Oct 2 20:24:10.827678 kubelet[1549]: E1002 20:24:10.827584 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:11.828815 kubelet[1549]: E1002 20:24:11.828707 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:12.829179 kubelet[1549]: E1002 20:24:12.829061 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:13.226594 kubelet[1549]: W1002 20:24:13.226355 1549 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9c4a188_e695_4d81_baf6_9b5d853a5d88.slice/cri-containerd-fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d.scope WatchSource:0}: task fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d not found: not found Oct 2 20:24:13.829640 kubelet[1549]: E1002 20:24:13.829536 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:14.777739 kubelet[1549]: E1002 20:24:14.777619 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:24:14.829858 kubelet[1549]: E1002 20:24:14.829713 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:15.830851 kubelet[1549]: E1002 20:24:15.830740 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:16.832111 kubelet[1549]: E1002 20:24:16.832006 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:17.833166 kubelet[1549]: E1002 20:24:17.833059 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:18.834182 kubelet[1549]: E1002 20:24:18.834076 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:19.675237 kubelet[1549]: E1002 20:24:19.675124 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
20:24:19.778623 kubelet[1549]: E1002 20:24:19.778528 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:24:19.835145 kubelet[1549]: E1002 20:24:19.835040 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:20.835338 kubelet[1549]: E1002 20:24:20.835227 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:21.835900 kubelet[1549]: E1002 20:24:21.835795 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:22.836797 kubelet[1549]: E1002 20:24:22.836694 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:23.837812 kubelet[1549]: E1002 20:24:23.837702 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:24.780721 kubelet[1549]: E1002 20:24:24.780669 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:24:24.838603 kubelet[1549]: E1002 20:24:24.838499 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:24.873257 kubelet[1549]: E1002 20:24:24.873149 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-8tk78_kube-system(a9c4a188-e695-4d81-baf6-9b5d853a5d88)\"" pod="kube-system/cilium-8tk78" podUID=a9c4a188-e695-4d81-baf6-9b5d853a5d88 Oct 2 20:24:25.839208 kubelet[1549]: E1002 20:24:25.839088 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:26.840463 kubelet[1549]: E1002 20:24:26.840315 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:27.840991 kubelet[1549]: E1002 20:24:27.840887 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:28.842062 kubelet[1549]: E1002 20:24:28.841957 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:29.781992 kubelet[1549]: E1002 20:24:29.781899 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:24:29.843012 kubelet[1549]: E1002 20:24:29.842911 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:30.844159 kubelet[1549]: E1002 20:24:30.844052 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:31.844841 kubelet[1549]: E1002 20:24:31.844734 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:32.846053 kubelet[1549]: E1002 20:24:32.845981 1549 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:33.846269 kubelet[1549]: E1002 20:24:33.846156 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:34.783694 kubelet[1549]: E1002 20:24:34.783632 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:24:34.846880 kubelet[1549]: E1002 20:24:34.846778 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:35.847350 kubelet[1549]: E1002 20:24:35.847247 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:36.848451 kubelet[1549]: E1002 20:24:36.848342 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:37.849254 kubelet[1549]: E1002 20:24:37.849153 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:38.849522 kubelet[1549]: E1002 20:24:38.849457 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:38.877514 env[1156]: time="2023-10-02T20:24:38.877353941Z" level=info msg="CreateContainer within sandbox \"a60f49689442b300bd429e77f7c13fa0ae65ba324b3e78eec35406277560a49f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 20:24:38.891062 env[1156]: time="2023-10-02T20:24:38.890870604Z" level=info msg="CreateContainer within sandbox \"a60f49689442b300bd429e77f7c13fa0ae65ba324b3e78eec35406277560a49f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"648346b6e84ab9a23e69f5539b174b23583b398c13a836e5d9fb8cce870246ce\"" Oct 2 20:24:38.891331 env[1156]: time="2023-10-02T20:24:38.891315913Z" level=info msg="StartContainer for \"648346b6e84ab9a23e69f5539b174b23583b398c13a836e5d9fb8cce870246ce\"" Oct 2 20:24:38.921695 systemd[1]: Started cri-containerd-648346b6e84ab9a23e69f5539b174b23583b398c13a836e5d9fb8cce870246ce.scope. Oct 2 20:24:38.925682 systemd[1]: cri-containerd-648346b6e84ab9a23e69f5539b174b23583b398c13a836e5d9fb8cce870246ce.scope: Deactivated successfully. Oct 2 20:24:38.925819 systemd[1]: Stopped cri-containerd-648346b6e84ab9a23e69f5539b174b23583b398c13a836e5d9fb8cce870246ce.scope. 
Oct 2 20:24:38.930073 env[1156]: time="2023-10-02T20:24:38.930014854Z" level=info msg="shim disconnected" id=648346b6e84ab9a23e69f5539b174b23583b398c13a836e5d9fb8cce870246ce Oct 2 20:24:38.930073 env[1156]: time="2023-10-02T20:24:38.930043537Z" level=warning msg="cleaning up after shim disconnected" id=648346b6e84ab9a23e69f5539b174b23583b398c13a836e5d9fb8cce870246ce namespace=k8s.io Oct 2 20:24:38.930073 env[1156]: time="2023-10-02T20:24:38.930049088Z" level=info msg="cleaning up dead shim" Oct 2 20:24:38.945392 env[1156]: time="2023-10-02T20:24:38.945340557Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:24:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2554 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:24:38Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/648346b6e84ab9a23e69f5539b174b23583b398c13a836e5d9fb8cce870246ce/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:24:38.945541 env[1156]: time="2023-10-02T20:24:38.945483462Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 20:24:38.945669 env[1156]: time="2023-10-02T20:24:38.945611001Z" level=error msg="Failed to pipe stderr of container \"648346b6e84ab9a23e69f5539b174b23583b398c13a836e5d9fb8cce870246ce\"" error="reading from a closed fifo" Oct 2 20:24:38.945669 env[1156]: time="2023-10-02T20:24:38.945628829Z" level=error msg="Failed to pipe stdout of container \"648346b6e84ab9a23e69f5539b174b23583b398c13a836e5d9fb8cce870246ce\"" error="reading from a closed fifo" Oct 2 20:24:38.946305 env[1156]: time="2023-10-02T20:24:38.946249927Z" level=error msg="StartContainer for \"648346b6e84ab9a23e69f5539b174b23583b398c13a836e5d9fb8cce870246ce\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:24:38.946457 kubelet[1549]: E1002 20:24:38.946398 1549 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="648346b6e84ab9a23e69f5539b174b23583b398c13a836e5d9fb8cce870246ce" Oct 2 20:24:38.946519 kubelet[1549]: E1002 20:24:38.946468 1549 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:24:38.946519 kubelet[1549]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:24:38.946519 kubelet[1549]: rm /hostbin/cilium-mount Oct 2 20:24:38.946519 kubelet[1549]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-bb9n5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-8tk78_kube-system(a9c4a188-e695-4d81-baf6-9b5d853a5d88): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:24:38.946629 kubelet[1549]: E1002 20:24:38.946497 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-8tk78" podUID=a9c4a188-e695-4d81-baf6-9b5d853a5d88 Oct 2 20:24:39.505029 kubelet[1549]: I1002 20:24:39.504934 1549 scope.go:115] "RemoveContainer" containerID="fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d" Oct 2 20:24:39.505764 kubelet[1549]: I1002 20:24:39.505680 1549 scope.go:115] "RemoveContainer" containerID="fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d" Oct 2 20:24:39.507775 env[1156]: time="2023-10-02T20:24:39.507670794Z" level=info msg="RemoveContainer for \"fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d\"" Oct 2 20:24:39.508795 env[1156]: time="2023-10-02T20:24:39.508664824Z" level=info msg="RemoveContainer for \"fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d\"" Oct 2 20:24:39.509005 env[1156]: time="2023-10-02T20:24:39.508898493Z" level=error msg="RemoveContainer for \"fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d\" failed" error="failed to set removing state for container \"fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d\": container is already in removing state" Oct 2 20:24:39.509296 kubelet[1549]: E1002 20:24:39.509241 1549 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d\": container is already in removing state" 
containerID="fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d" Oct 2 20:24:39.509483 kubelet[1549]: E1002 20:24:39.509317 1549 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d": container is already in removing state; Skipping pod "cilium-8tk78_kube-system(a9c4a188-e695-4d81-baf6-9b5d853a5d88)" Oct 2 20:24:39.510029 kubelet[1549]: E1002 20:24:39.509960 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-8tk78_kube-system(a9c4a188-e695-4d81-baf6-9b5d853a5d88)\"" pod="kube-system/cilium-8tk78" podUID=a9c4a188-e695-4d81-baf6-9b5d853a5d88 Oct 2 20:24:39.526191 env[1156]: time="2023-10-02T20:24:39.526084286Z" level=info msg="RemoveContainer for \"fafb637d5dd70a116e98acea42e49b462570c2d40c840cc0038b9c56b6392f0d\" returns successfully" Oct 2 20:24:39.674810 kubelet[1549]: E1002 20:24:39.674698 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:39.698067 env[1156]: time="2023-10-02T20:24:39.697935868Z" level=info msg="StopPodSandbox for \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\"" Oct 2 20:24:39.698344 env[1156]: time="2023-10-02T20:24:39.698146768Z" level=info msg="TearDown network for sandbox \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\" successfully" Oct 2 20:24:39.698344 env[1156]: time="2023-10-02T20:24:39.698261064Z" level=info msg="StopPodSandbox for \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\" returns successfully" Oct 2 20:24:39.699125 env[1156]: time="2023-10-02T20:24:39.699017072Z" level=info msg="RemovePodSandbox for \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\"" Oct 2 20:24:39.699324 env[1156]: time="2023-10-02T20:24:39.699093176Z" level=info msg="Forcibly stopping sandbox \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\"" Oct 2 20:24:39.699324 env[1156]: time="2023-10-02T20:24:39.699289990Z" level=info msg="TearDown network for sandbox \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\" successfully" Oct 2 20:24:39.708011 env[1156]: time="2023-10-02T20:24:39.707893869Z" level=info msg="RemovePodSandbox \"13e9dbad691944ec09061dd2fc6fcea7cfe4e4c4042e8cc92dd6ac34ac223c37\" returns successfully" Oct 2 20:24:39.708812 env[1156]: time="2023-10-02T20:24:39.708708051Z" level=info msg="StopPodSandbox for \"512d60a316fb6f24eaa0ea30910b686b80f0d3dece553cbabd10fb4cbbdb1e02\"" Oct 2 20:24:39.709031 env[1156]: time="2023-10-02T20:24:39.708904962Z" level=info msg="TearDown network for sandbox \"512d60a316fb6f24eaa0ea30910b686b80f0d3dece553cbabd10fb4cbbdb1e02\" successfully" Oct 2 20:24:39.709164 env[1156]: time="2023-10-02T20:24:39.709010999Z" level=info msg="StopPodSandbox for \"512d60a316fb6f24eaa0ea30910b686b80f0d3dece553cbabd10fb4cbbdb1e02\" returns successfully" Oct 2 20:24:39.709840 env[1156]: time="2023-10-02T20:24:39.709728028Z" level=info msg="RemovePodSandbox for \"512d60a316fb6f24eaa0ea30910b686b80f0d3dece553cbabd10fb4cbbdb1e02\"" Oct 2 20:24:39.710052 env[1156]: time="2023-10-02T20:24:39.709806865Z" level=info msg="Forcibly stopping sandbox \"512d60a316fb6f24eaa0ea30910b686b80f0d3dece553cbabd10fb4cbbdb1e02\"" Oct 2 20:24:39.710052 env[1156]: 
time="2023-10-02T20:24:39.710003401Z" level=info msg="TearDown network for sandbox \"512d60a316fb6f24eaa0ea30910b686b80f0d3dece553cbabd10fb4cbbdb1e02\" successfully" Oct 2 20:24:39.713270 env[1156]: time="2023-10-02T20:24:39.713169911Z" level=info msg="RemovePodSandbox \"512d60a316fb6f24eaa0ea30910b686b80f0d3dece553cbabd10fb4cbbdb1e02\" returns successfully" Oct 2 20:24:39.785942 kubelet[1549]: E1002 20:24:39.785740 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:24:39.850429 kubelet[1549]: E1002 20:24:39.850296 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:39.889973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-648346b6e84ab9a23e69f5539b174b23583b398c13a836e5d9fb8cce870246ce-rootfs.mount: Deactivated successfully. Oct 2 20:24:40.851160 kubelet[1549]: E1002 20:24:40.851057 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:41.852080 kubelet[1549]: E1002 20:24:41.851977 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:42.038119 kubelet[1549]: W1002 20:24:42.037995 1549 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9c4a188_e695_4d81_baf6_9b5d853a5d88.slice/cri-containerd-648346b6e84ab9a23e69f5539b174b23583b398c13a836e5d9fb8cce870246ce.scope WatchSource:0}: task 648346b6e84ab9a23e69f5539b174b23583b398c13a836e5d9fb8cce870246ce not found: not found Oct 2 20:24:42.852398 kubelet[1549]: E1002 20:24:42.852294 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:43.852946 kubelet[1549]: E1002 20:24:43.852835 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:44.786892 kubelet[1549]: E1002 20:24:44.786792 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:24:44.853985 kubelet[1549]: E1002 20:24:44.853878 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:45.854606 kubelet[1549]: E1002 20:24:45.854487 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:46.855520 kubelet[1549]: E1002 20:24:46.855393 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:47.855830 kubelet[1549]: E1002 20:24:47.855723 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:48.856056 kubelet[1549]: E1002 20:24:48.855944 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:49.788099 kubelet[1549]: E1002 20:24:49.788001 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:24:49.856519 
kubelet[1549]: E1002 20:24:49.856379 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:50.857569 kubelet[1549]: E1002 20:24:50.857456 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:51.858272 kubelet[1549]: E1002 20:24:51.858167 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:51.874064 kubelet[1549]: E1002 20:24:51.873956 1549 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-8tk78_kube-system(a9c4a188-e695-4d81-baf6-9b5d853a5d88)\"" pod="kube-system/cilium-8tk78" podUID=a9c4a188-e695-4d81-baf6-9b5d853a5d88 Oct 2 20:24:52.859137 kubelet[1549]: E1002 20:24:52.859033 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:53.859276 kubelet[1549]: E1002 20:24:53.859176 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:54.789711 kubelet[1549]: E1002 20:24:54.789613 1549 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:24:54.860260 kubelet[1549]: E1002 20:24:54.860151 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:55.861222 kubelet[1549]: E1002 20:24:55.861121 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:56.147371 env[1156]: time="2023-10-02T20:24:56.147108377Z" level=info msg="StopPodSandbox for \"a60f49689442b300bd429e77f7c13fa0ae65ba324b3e78eec35406277560a49f\"" Oct 2 20:24:56.147371 env[1156]: time="2023-10-02T20:24:56.147288802Z" level=info msg="Container to stop \"648346b6e84ab9a23e69f5539b174b23583b398c13a836e5d9fb8cce870246ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:24:56.148328 env[1156]: time="2023-10-02T20:24:56.147634289Z" level=info msg="StopContainer for \"dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030\" with timeout 30 (s)" Oct 2 20:24:56.148490 env[1156]: time="2023-10-02T20:24:56.148381389Z" level=info msg="Stop container \"dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030\" with signal terminated" Oct 2 20:24:56.151086 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a60f49689442b300bd429e77f7c13fa0ae65ba324b3e78eec35406277560a49f-shm.mount: Deactivated successfully. Oct 2 20:24:56.165749 systemd[1]: cri-containerd-a60f49689442b300bd429e77f7c13fa0ae65ba324b3e78eec35406277560a49f.scope: Deactivated successfully. Oct 2 20:24:56.165000 audit: BPF prog-id=78 op=UNLOAD Oct 2 20:24:56.191277 kernel: kauditd_printk_skb: 50 callbacks suppressed Oct 2 20:24:56.191360 kernel: audit: type=1334 audit(1696278296.165:696): prog-id=78 op=UNLOAD Oct 2 20:24:56.224000 audit: BPF prog-id=81 op=UNLOAD Oct 2 20:24:56.224965 systemd[1]: cri-containerd-dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030.scope: Deactivated successfully. 
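Note: the "audit: BPF prog-id=… op=UNLOAD" records here are the kernel logging BPF programs being unloaded as the cri-containerd scopes are stopped, and "kauditd_printk_skb: 50 callbacks suppressed" means further audit records were rate-limited out of the console. If the suppressed records matter, they can usually still be read back; a hedged sketch using standard tools:

    # BPF audit records (type=1334) from the kernel ring buffer
    dmesg | grep 'audit.*BPF'
    # Same records via the kernel journal, scoped to this teardown window
    journalctl -k --since "2023-10-02 20:24:50" | grep 'op=UNLOAD'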
Oct 2 20:24:56.226607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a60f49689442b300bd429e77f7c13fa0ae65ba324b3e78eec35406277560a49f-rootfs.mount: Deactivated successfully. Oct 2 20:24:56.250393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030-rootfs.mount: Deactivated successfully. Oct 2 20:24:56.224000 audit: BPF prog-id=82 op=UNLOAD Oct 2 20:24:56.276101 kernel: audit: type=1334 audit(1696278296.224:697): prog-id=81 op=UNLOAD Oct 2 20:24:56.276143 kernel: audit: type=1334 audit(1696278296.224:698): prog-id=82 op=UNLOAD Oct 2 20:24:56.277000 audit: BPF prog-id=85 op=UNLOAD Oct 2 20:24:56.303469 kernel: audit: type=1334 audit(1696278296.277:699): prog-id=85 op=UNLOAD Oct 2 20:24:56.307886 env[1156]: time="2023-10-02T20:24:56.307861020Z" level=info msg="shim disconnected" id=a60f49689442b300bd429e77f7c13fa0ae65ba324b3e78eec35406277560a49f Oct 2 20:24:56.307946 env[1156]: time="2023-10-02T20:24:56.307888733Z" level=warning msg="cleaning up after shim disconnected" id=a60f49689442b300bd429e77f7c13fa0ae65ba324b3e78eec35406277560a49f namespace=k8s.io Oct 2 20:24:56.307946 env[1156]: time="2023-10-02T20:24:56.307894717Z" level=info msg="cleaning up dead shim" Oct 2 20:24:56.311443 env[1156]: time="2023-10-02T20:24:56.311426494Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:24:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2606 runtime=io.containerd.runc.v2\n" Oct 2 20:24:56.311580 env[1156]: time="2023-10-02T20:24:56.311568477Z" level=info msg="TearDown network for sandbox \"a60f49689442b300bd429e77f7c13fa0ae65ba324b3e78eec35406277560a49f\" successfully" Oct 2 20:24:56.311607 env[1156]: time="2023-10-02T20:24:56.311580460Z" level=info msg="StopPodSandbox for \"a60f49689442b300bd429e77f7c13fa0ae65ba324b3e78eec35406277560a49f\" returns successfully" Oct 2 20:24:56.323875 env[1156]: time="2023-10-02T20:24:56.323816430Z" level=info msg="shim disconnected" id=dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030 Oct 2 20:24:56.323875 env[1156]: time="2023-10-02T20:24:56.323835984Z" level=warning msg="cleaning up after shim disconnected" id=dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030 namespace=k8s.io Oct 2 20:24:56.323875 env[1156]: time="2023-10-02T20:24:56.323841555Z" level=info msg="cleaning up dead shim" Oct 2 20:24:56.327339 env[1156]: time="2023-10-02T20:24:56.327323280Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:24:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2618 runtime=io.containerd.runc.v2\n" Oct 2 20:24:56.328096 env[1156]: time="2023-10-02T20:24:56.328051444Z" level=info msg="StopContainer for \"dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030\" returns successfully" Oct 2 20:24:56.328323 env[1156]: time="2023-10-02T20:24:56.328285994Z" level=info msg="StopPodSandbox for \"a66b6cc108d118870cf2950a8bf4e7542dbb1a4714e0545efd80565f85e05c2a\"" Oct 2 20:24:56.328323 env[1156]: time="2023-10-02T20:24:56.328314769Z" level=info msg="Container to stop \"dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:24:56.329109 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a66b6cc108d118870cf2950a8bf4e7542dbb1a4714e0545efd80565f85e05c2a-shm.mount: Deactivated successfully. 
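Note: the kubelet has now driven two teardowns through the CRI: the a60f49… sandbox (whose only container, 648346b6…, had already exited) and the dc3a9e13… container, stopped with a 30-second grace period before its sandbox a66b6cc1… is stopped. Purely as a hedged illustration of the same CRI calls, assuming crictl is installed and pointed at containerd's CRI socket, the manual equivalent would be:

    # StopContainer with a 30s grace period, then StopPodSandbox
    crictl stop --timeout 30 dc3a9e13aaaf5
    crictl stopp a66b6cc108d11
    # Afterwards the sandbox should be listed as NotReady
    crictl pods --state NotReady

You would not normally run these by hand while the kubelet owns the pod; they are shown only to make the StopContainer/StopPodSandbox sequence in the log concrete.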
Oct 2 20:24:56.342878 systemd[1]: cri-containerd-a66b6cc108d118870cf2950a8bf4e7542dbb1a4714e0545efd80565f85e05c2a.scope: Deactivated successfully. Oct 2 20:24:56.342000 audit: BPF prog-id=74 op=UNLOAD Oct 2 20:24:56.369634 kernel: audit: type=1334 audit(1696278296.342:700): prog-id=74 op=UNLOAD Oct 2 20:24:56.374852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a66b6cc108d118870cf2950a8bf4e7542dbb1a4714e0545efd80565f85e05c2a-rootfs.mount: Deactivated successfully. Oct 2 20:24:56.376605 env[1156]: time="2023-10-02T20:24:56.376579920Z" level=info msg="shim disconnected" id=a66b6cc108d118870cf2950a8bf4e7542dbb1a4714e0545efd80565f85e05c2a Oct 2 20:24:56.376690 env[1156]: time="2023-10-02T20:24:56.376606533Z" level=warning msg="cleaning up after shim disconnected" id=a66b6cc108d118870cf2950a8bf4e7542dbb1a4714e0545efd80565f85e05c2a namespace=k8s.io Oct 2 20:24:56.376690 env[1156]: time="2023-10-02T20:24:56.376612908Z" level=info msg="cleaning up dead shim" Oct 2 20:24:56.377000 audit: BPF prog-id=77 op=UNLOAD Oct 2 20:24:56.404630 kernel: audit: type=1334 audit(1696278296.377:701): prog-id=77 op=UNLOAD Oct 2 20:24:56.417334 env[1156]: time="2023-10-02T20:24:56.417286697Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:24:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2647 runtime=io.containerd.runc.v2\n" Oct 2 20:24:56.417500 env[1156]: time="2023-10-02T20:24:56.417441706Z" level=info msg="TearDown network for sandbox \"a66b6cc108d118870cf2950a8bf4e7542dbb1a4714e0545efd80565f85e05c2a\" successfully" Oct 2 20:24:56.417500 env[1156]: time="2023-10-02T20:24:56.417456046Z" level=info msg="StopPodSandbox for \"a66b6cc108d118870cf2950a8bf4e7542dbb1a4714e0545efd80565f85e05c2a\" returns successfully" Oct 2 20:24:56.498170 kubelet[1549]: I1002 20:24:56.498069 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-etc-cni-netd\") pod \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " Oct 2 20:24:56.498170 kubelet[1549]: I1002 20:24:56.498179 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9c4a188-e695-4d81-baf6-9b5d853a5d88-hubble-tls\") pod \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " Oct 2 20:24:56.498678 kubelet[1549]: I1002 20:24:56.498241 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-lib-modules\") pod \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " Oct 2 20:24:56.498678 kubelet[1549]: I1002 20:24:56.498197 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a9c4a188-e695-4d81-baf6-9b5d853a5d88" (UID: "a9c4a188-e695-4d81-baf6-9b5d853a5d88"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:24:56.498678 kubelet[1549]: I1002 20:24:56.498308 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f12c4527-d3ac-4425-a515-0f53d39daccf-cilium-config-path\") pod \"f12c4527-d3ac-4425-a515-0f53d39daccf\" (UID: \"f12c4527-d3ac-4425-a515-0f53d39daccf\") " Oct 2 20:24:56.498678 kubelet[1549]: I1002 20:24:56.498349 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a9c4a188-e695-4d81-baf6-9b5d853a5d88" (UID: "a9c4a188-e695-4d81-baf6-9b5d853a5d88"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:24:56.498678 kubelet[1549]: I1002 20:24:56.498366 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-hostproc\") pod \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " Oct 2 20:24:56.499236 kubelet[1549]: I1002 20:24:56.498432 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-hostproc" (OuterVolumeSpecName: "hostproc") pod "a9c4a188-e695-4d81-baf6-9b5d853a5d88" (UID: "a9c4a188-e695-4d81-baf6-9b5d853a5d88"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:24:56.499236 kubelet[1549]: I1002 20:24:56.498520 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-host-proc-sys-net\") pod \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " Oct 2 20:24:56.499236 kubelet[1549]: I1002 20:24:56.498600 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a9c4a188-e695-4d81-baf6-9b5d853a5d88-clustermesh-secrets\") pod \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " Oct 2 20:24:56.499236 kubelet[1549]: I1002 20:24:56.498588 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a9c4a188-e695-4d81-baf6-9b5d853a5d88" (UID: "a9c4a188-e695-4d81-baf6-9b5d853a5d88"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:24:56.499236 kubelet[1549]: I1002 20:24:56.498665 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9c4a188-e695-4d81-baf6-9b5d853a5d88-cilium-config-path\") pod \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " Oct 2 20:24:56.499236 kubelet[1549]: I1002 20:24:56.498723 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-cilium-run\") pod \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " Oct 2 20:24:56.499917 kubelet[1549]: I1002 20:24:56.498777 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-bpf-maps\") pod \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " Oct 2 20:24:56.499917 kubelet[1549]: I1002 20:24:56.498842 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdr2p\" (UniqueName: \"kubernetes.io/projected/f12c4527-d3ac-4425-a515-0f53d39daccf-kube-api-access-jdr2p\") pod \"f12c4527-d3ac-4425-a515-0f53d39daccf\" (UID: \"f12c4527-d3ac-4425-a515-0f53d39daccf\") " Oct 2 20:24:56.499917 kubelet[1549]: W1002 20:24:56.498843 1549 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/f12c4527-d3ac-4425-a515-0f53d39daccf/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:24:56.499917 kubelet[1549]: I1002 20:24:56.498842 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a9c4a188-e695-4d81-baf6-9b5d853a5d88" (UID: "a9c4a188-e695-4d81-baf6-9b5d853a5d88"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:24:56.499917 kubelet[1549]: I1002 20:24:56.498922 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a9c4a188-e695-4d81-baf6-9b5d853a5d88-cilium-ipsec-secrets\") pod \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " Oct 2 20:24:56.499917 kubelet[1549]: I1002 20:24:56.498956 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a9c4a188-e695-4d81-baf6-9b5d853a5d88" (UID: "a9c4a188-e695-4d81-baf6-9b5d853a5d88"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:24:56.500577 kubelet[1549]: W1002 20:24:56.499036 1549 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/a9c4a188-e695-4d81-baf6-9b5d853a5d88/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:24:56.500577 kubelet[1549]: I1002 20:24:56.499049 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb9n5\" (UniqueName: \"kubernetes.io/projected/a9c4a188-e695-4d81-baf6-9b5d853a5d88-kube-api-access-bb9n5\") pod \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " Oct 2 20:24:56.500577 kubelet[1549]: I1002 20:24:56.499242 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-cni-path\") pod \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " Oct 2 20:24:56.500577 kubelet[1549]: I1002 20:24:56.499353 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-cilium-cgroup\") pod \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " Oct 2 20:24:56.500577 kubelet[1549]: I1002 20:24:56.499339 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-cni-path" (OuterVolumeSpecName: "cni-path") pod "a9c4a188-e695-4d81-baf6-9b5d853a5d88" (UID: "a9c4a188-e695-4d81-baf6-9b5d853a5d88"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:24:56.500577 kubelet[1549]: I1002 20:24:56.499475 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-xtables-lock\") pod \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " Oct 2 20:24:56.501204 kubelet[1549]: I1002 20:24:56.499584 1549 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-host-proc-sys-kernel\") pod \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\" (UID: \"a9c4a188-e695-4d81-baf6-9b5d853a5d88\") " Oct 2 20:24:56.501204 kubelet[1549]: I1002 20:24:56.499500 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a9c4a188-e695-4d81-baf6-9b5d853a5d88" (UID: "a9c4a188-e695-4d81-baf6-9b5d853a5d88"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:24:56.501204 kubelet[1549]: I1002 20:24:56.499564 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a9c4a188-e695-4d81-baf6-9b5d853a5d88" (UID: "a9c4a188-e695-4d81-baf6-9b5d853a5d88"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:24:56.501204 kubelet[1549]: I1002 20:24:56.499702 1549 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-cilium-cgroup\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:24:56.501204 kubelet[1549]: I1002 20:24:56.499680 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a9c4a188-e695-4d81-baf6-9b5d853a5d88" (UID: "a9c4a188-e695-4d81-baf6-9b5d853a5d88"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:24:56.501204 kubelet[1549]: I1002 20:24:56.499764 1549 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-etc-cni-netd\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:24:56.501863 kubelet[1549]: I1002 20:24:56.499823 1549 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-lib-modules\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:24:56.501863 kubelet[1549]: I1002 20:24:56.499876 1549 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-hostproc\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:24:56.501863 kubelet[1549]: I1002 20:24:56.499933 1549 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-host-proc-sys-net\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:24:56.501863 kubelet[1549]: I1002 20:24:56.499990 1549 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-cilium-run\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:24:56.501863 kubelet[1549]: I1002 20:24:56.500046 1549 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-bpf-maps\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:24:56.501863 kubelet[1549]: I1002 20:24:56.500100 1549 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-cni-path\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:24:56.504338 kubelet[1549]: I1002 20:24:56.504325 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f12c4527-d3ac-4425-a515-0f53d39daccf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f12c4527-d3ac-4425-a515-0f53d39daccf" (UID: "f12c4527-d3ac-4425-a515-0f53d39daccf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:24:56.504420 kubelet[1549]: I1002 20:24:56.504406 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c4a188-e695-4d81-baf6-9b5d853a5d88-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a9c4a188-e695-4d81-baf6-9b5d853a5d88" (UID: "a9c4a188-e695-4d81-baf6-9b5d853a5d88"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:24:56.504420 kubelet[1549]: I1002 20:24:56.504406 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c4a188-e695-4d81-baf6-9b5d853a5d88-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a9c4a188-e695-4d81-baf6-9b5d853a5d88" (UID: "a9c4a188-e695-4d81-baf6-9b5d853a5d88"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:24:56.504506 kubelet[1549]: I1002 20:24:56.504439 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9c4a188-e695-4d81-baf6-9b5d853a5d88-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a9c4a188-e695-4d81-baf6-9b5d853a5d88" (UID: "a9c4a188-e695-4d81-baf6-9b5d853a5d88"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:24:56.504506 kubelet[1549]: I1002 20:24:56.504440 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f12c4527-d3ac-4425-a515-0f53d39daccf-kube-api-access-jdr2p" (OuterVolumeSpecName: "kube-api-access-jdr2p") pod "f12c4527-d3ac-4425-a515-0f53d39daccf" (UID: "f12c4527-d3ac-4425-a515-0f53d39daccf"). InnerVolumeSpecName "kube-api-access-jdr2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:24:56.504710 kubelet[1549]: I1002 20:24:56.504650 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c4a188-e695-4d81-baf6-9b5d853a5d88-kube-api-access-bb9n5" (OuterVolumeSpecName: "kube-api-access-bb9n5") pod "a9c4a188-e695-4d81-baf6-9b5d853a5d88" (UID: "a9c4a188-e695-4d81-baf6-9b5d853a5d88"). InnerVolumeSpecName "kube-api-access-bb9n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:24:56.504766 kubelet[1549]: I1002 20:24:56.504704 1549 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c4a188-e695-4d81-baf6-9b5d853a5d88-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "a9c4a188-e695-4d81-baf6-9b5d853a5d88" (UID: "a9c4a188-e695-4d81-baf6-9b5d853a5d88"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:24:56.548097 kubelet[1549]: I1002 20:24:56.548042 1549 scope.go:115] "RemoveContainer" containerID="dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030" Oct 2 20:24:56.548912 env[1156]: time="2023-10-02T20:24:56.548878964Z" level=info msg="RemoveContainer for \"dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030\"" Oct 2 20:24:56.550295 env[1156]: time="2023-10-02T20:24:56.550265921Z" level=info msg="RemoveContainer for \"dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030\" returns successfully" Oct 2 20:24:56.550389 kubelet[1549]: I1002 20:24:56.550374 1549 scope.go:115] "RemoveContainer" containerID="dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030" Oct 2 20:24:56.550640 env[1156]: time="2023-10-02T20:24:56.550570383Z" level=error msg="ContainerStatus for \"dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030\": not found" Oct 2 20:24:56.550752 kubelet[1549]: E1002 20:24:56.550737 1549 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030\": not found" containerID="dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030" Oct 2 20:24:56.550834 kubelet[1549]: I1002 20:24:56.550769 1549 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030} err="failed to get container status \"dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc3a9e13aaaf537e5b301c81414a08ba21ecc67c40247d3cbe46df1aeecaa030\": not found" Oct 2 20:24:56.550834 kubelet[1549]: I1002 20:24:56.550787 1549 scope.go:115] "RemoveContainer" containerID="648346b6e84ab9a23e69f5539b174b23583b398c13a836e5d9fb8cce870246ce" Oct 2 20:24:56.551593 systemd[1]: Removed slice kubepods-besteffort-podf12c4527_d3ac_4425_a515_0f53d39daccf.slice. Oct 2 20:24:56.551804 env[1156]: time="2023-10-02T20:24:56.551721852Z" level=info msg="RemoveContainer for \"648346b6e84ab9a23e69f5539b174b23583b398c13a836e5d9fb8cce870246ce\"" Oct 2 20:24:56.553147 systemd[1]: Removed slice kubepods-burstable-poda9c4a188_e695_4d81_baf6_9b5d853a5d88.slice. 
Oct 2 20:24:56.553294 env[1156]: time="2023-10-02T20:24:56.553164074Z" level=info msg="RemoveContainer for \"648346b6e84ab9a23e69f5539b174b23583b398c13a836e5d9fb8cce870246ce\" returns successfully" Oct 2 20:24:56.601364 kubelet[1549]: I1002 20:24:56.601265 1549 reconciler.go:399] "Volume detached for volume \"kube-api-access-jdr2p\" (UniqueName: \"kubernetes.io/projected/f12c4527-d3ac-4425-a515-0f53d39daccf-kube-api-access-jdr2p\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:24:56.601364 kubelet[1549]: I1002 20:24:56.601328 1549 reconciler.go:399] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a9c4a188-e695-4d81-baf6-9b5d853a5d88-cilium-ipsec-secrets\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:24:56.601364 kubelet[1549]: I1002 20:24:56.601362 1549 reconciler.go:399] "Volume detached for volume \"kube-api-access-bb9n5\" (UniqueName: \"kubernetes.io/projected/a9c4a188-e695-4d81-baf6-9b5d853a5d88-kube-api-access-bb9n5\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:24:56.601903 kubelet[1549]: I1002 20:24:56.601394 1549 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-xtables-lock\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:24:56.601903 kubelet[1549]: I1002 20:24:56.601449 1549 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9c4a188-e695-4d81-baf6-9b5d853a5d88-host-proc-sys-kernel\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:24:56.601903 kubelet[1549]: I1002 20:24:56.601481 1549 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9c4a188-e695-4d81-baf6-9b5d853a5d88-hubble-tls\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:24:56.601903 kubelet[1549]: I1002 20:24:56.601511 1549 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f12c4527-d3ac-4425-a515-0f53d39daccf-cilium-config-path\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:24:56.601903 kubelet[1549]: I1002 20:24:56.601541 1549 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a9c4a188-e695-4d81-baf6-9b5d853a5d88-clustermesh-secrets\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:24:56.601903 kubelet[1549]: I1002 20:24:56.601570 1549 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9c4a188-e695-4d81-baf6-9b5d853a5d88-cilium-config-path\") on node \"10.67.124.211\" DevicePath \"\"" Oct 2 20:24:56.862176 kubelet[1549]: E1002 20:24:56.862099 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:24:57.151392 systemd[1]: var-lib-kubelet-pods-f12c4527\x2dd3ac\x2d4425\x2da515\x2d0f53d39daccf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djdr2p.mount: Deactivated successfully. Oct 2 20:24:57.151668 systemd[1]: var-lib-kubelet-pods-a9c4a188\x2de695\x2d4d81\x2dbaf6\x2d9b5d853a5d88-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbb9n5.mount: Deactivated successfully. Oct 2 20:24:57.151858 systemd[1]: var-lib-kubelet-pods-a9c4a188\x2de695\x2d4d81\x2dbaf6\x2d9b5d853a5d88-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
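Note: interleaved with all of this, the kubelet has been emitting file_linux.go:61 "Unable to read config path … /etc/kubernetes/manifests" roughly once per second (most recently at 20:24:56.862). It is noisy but harmless: the kubelet's configured static-pod path (staticPodPath / --pod-manifest-path) simply does not exist on this node. Creating the directory, even empty, silences that particular error; whether this node should actually host static pods is a separate question. A minimal sketch:

    # Create the configured static-pod directory so the once-per-second
    # "path does not exist, ignoring" error stops (an empty dir is fine).
    mkdir -p /etc/kubernetes/manifests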
Oct 2 20:24:57.152038 systemd[1]: var-lib-kubelet-pods-a9c4a188\x2de695\x2d4d81\x2dbaf6\x2d9b5d853a5d88-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 20:24:57.152197 systemd[1]: var-lib-kubelet-pods-a9c4a188\x2de695\x2d4d81\x2dbaf6\x2d9b5d853a5d88-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
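Note: the remaining per-volume mount units under /var/lib/kubelet/pods/a9c4a188-…/volumes (service-account token, hubble-tls, clustermesh-secrets, cilium-ipsec-secrets) are now unmounted, completing the cleanup of cilium-8tk78. Because the CNI plugin never initialized (the recurring "Container runtime network not ready … cni plugin not initialized"), node networking will stay not-ready until a CNI pod comes up successfully. Two hedged, read-only checks on the node (default CNI conf dir assumed):

    # Any kubelet volume mounts left behind for the removed pod?
    findmnt -r | grep a9c4a188-e695-4d81-baf6-9b5d853a5d88 || echo "no mounts left for pod a9c4a188"
    # Has any CNI config been written yet?
    ls -l /etc/cni/net.d/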