Oct 2 23:58:07.580149 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 23:58:07.580162 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 23:58:07.580170 kernel: BIOS-provided physical RAM map: Oct 2 23:58:07.580174 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Oct 2 23:58:07.580178 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Oct 2 23:58:07.580181 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Oct 2 23:58:07.580186 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Oct 2 23:58:07.580190 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Oct 2 23:58:07.580194 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000820e1fff] usable Oct 2 23:58:07.580198 kernel: BIOS-e820: [mem 0x00000000820e2000-0x00000000820e2fff] ACPI NVS Oct 2 23:58:07.580203 kernel: BIOS-e820: [mem 0x00000000820e3000-0x00000000820e3fff] reserved Oct 2 23:58:07.580207 kernel: BIOS-e820: [mem 0x00000000820e4000-0x000000008afccfff] usable Oct 2 23:58:07.580211 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Oct 2 23:58:07.580215 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Oct 2 23:58:07.580220 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Oct 2 23:58:07.580225 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Oct 2 23:58:07.580229 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Oct 2 23:58:07.580233 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Oct 2 23:58:07.580238 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Oct 2 23:58:07.580242 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Oct 2 23:58:07.580246 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Oct 2 23:58:07.580251 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Oct 2 23:58:07.580255 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Oct 2 23:58:07.580259 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Oct 2 23:58:07.580264 kernel: NX (Execute Disable) protection: active Oct 2 23:58:07.580268 kernel: SMBIOS 3.2.1 present. 
Oct 2 23:58:07.580273 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 Oct 2 23:58:07.580278 kernel: tsc: Detected 3400.000 MHz processor Oct 2 23:58:07.580282 kernel: tsc: Detected 3399.906 MHz TSC Oct 2 23:58:07.580286 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 23:58:07.580291 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 23:58:07.580296 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Oct 2 23:58:07.580300 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 23:58:07.580305 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Oct 2 23:58:07.580309 kernel: Using GB pages for direct mapping Oct 2 23:58:07.580314 kernel: ACPI: Early table checksum verification disabled Oct 2 23:58:07.580319 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Oct 2 23:58:07.580323 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Oct 2 23:58:07.580328 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Oct 2 23:58:07.580332 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Oct 2 23:58:07.580339 kernel: ACPI: FACS 0x000000008C66CF80 000040 Oct 2 23:58:07.580344 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Oct 2 23:58:07.580349 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Oct 2 23:58:07.580354 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Oct 2 23:58:07.580359 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Oct 2 23:58:07.580364 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Oct 2 23:58:07.580371 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Oct 2 23:58:07.580376 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Oct 2 23:58:07.580381 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Oct 2 23:58:07.580386 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Oct 2 23:58:07.580391 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Oct 2 23:58:07.580396 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Oct 2 23:58:07.580401 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Oct 2 23:58:07.580406 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Oct 2 23:58:07.580411 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Oct 2 23:58:07.580416 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Oct 2 23:58:07.580420 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Oct 2 23:58:07.580425 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Oct 2 23:58:07.580431 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Oct 2 23:58:07.580435 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Oct 2 23:58:07.580440 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Oct 2 23:58:07.580445 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Oct 2 23:58:07.580450 kernel: ACPI: SSDT 
0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Oct 2 23:58:07.580455 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Oct 2 23:58:07.580459 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Oct 2 23:58:07.580464 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Oct 2 23:58:07.580469 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Oct 2 23:58:07.580475 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000) Oct 2 23:58:07.580480 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Oct 2 23:58:07.580484 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Oct 2 23:58:07.580489 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Oct 2 23:58:07.580494 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Oct 2 23:58:07.580499 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Oct 2 23:58:07.580503 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Oct 2 23:58:07.580508 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Oct 2 23:58:07.580513 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Oct 2 23:58:07.580519 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Oct 2 23:58:07.580523 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Oct 2 23:58:07.580528 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Oct 2 23:58:07.580533 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Oct 2 23:58:07.580538 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Oct 2 23:58:07.580542 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Oct 2 23:58:07.580547 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Oct 2 23:58:07.580552 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Oct 2 23:58:07.580558 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Oct 2 23:58:07.580562 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Oct 2 23:58:07.580567 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Oct 2 23:58:07.580572 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Oct 2 23:58:07.580577 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Oct 2 23:58:07.580582 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Oct 2 23:58:07.580586 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Oct 2 23:58:07.580591 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Oct 2 23:58:07.580596 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Oct 2 23:58:07.580601 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Oct 2 23:58:07.580606 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Oct 2 23:58:07.580611 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Oct 2 23:58:07.580616 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Oct 2 23:58:07.580620 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Oct 2 23:58:07.580625 kernel: ACPI: Reserving HEST table memory at [mem 
0x8c599ff8-0x8c59a273] Oct 2 23:58:07.580630 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Oct 2 23:58:07.580635 kernel: No NUMA configuration found Oct 2 23:58:07.580640 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Oct 2 23:58:07.580644 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Oct 2 23:58:07.580650 kernel: Zone ranges: Oct 2 23:58:07.580655 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 23:58:07.580660 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Oct 2 23:58:07.580665 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Oct 2 23:58:07.580669 kernel: Movable zone start for each node Oct 2 23:58:07.580674 kernel: Early memory node ranges Oct 2 23:58:07.580679 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Oct 2 23:58:07.580684 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Oct 2 23:58:07.580688 kernel: node 0: [mem 0x0000000040400000-0x00000000820e1fff] Oct 2 23:58:07.580694 kernel: node 0: [mem 0x00000000820e4000-0x000000008afccfff] Oct 2 23:58:07.580699 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Oct 2 23:58:07.580703 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Oct 2 23:58:07.580708 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Oct 2 23:58:07.580713 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Oct 2 23:58:07.580718 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 23:58:07.580726 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Oct 2 23:58:07.580732 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Oct 2 23:58:07.580737 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Oct 2 23:58:07.580742 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Oct 2 23:58:07.580748 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Oct 2 23:58:07.580753 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Oct 2 23:58:07.580759 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Oct 2 23:58:07.580764 kernel: ACPI: PM-Timer IO Port: 0x1808 Oct 2 23:58:07.580769 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Oct 2 23:58:07.580774 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Oct 2 23:58:07.580779 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Oct 2 23:58:07.580785 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Oct 2 23:58:07.580790 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Oct 2 23:58:07.580796 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Oct 2 23:58:07.580801 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Oct 2 23:58:07.580806 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Oct 2 23:58:07.580811 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Oct 2 23:58:07.580816 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Oct 2 23:58:07.580821 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Oct 2 23:58:07.580826 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Oct 2 23:58:07.580832 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Oct 2 23:58:07.580837 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Oct 2 23:58:07.580842 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Oct 2 23:58:07.580847 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Oct 2 23:58:07.580853 kernel: IOAPIC[0]: apic_id 2, version 32, address 
0xfec00000, GSI 0-119 Oct 2 23:58:07.580858 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 2 23:58:07.580863 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 2 23:58:07.580868 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 23:58:07.580873 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 2 23:58:07.580879 kernel: TSC deadline timer available Oct 2 23:58:07.580884 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Oct 2 23:58:07.580889 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Oct 2 23:58:07.580895 kernel: Booting paravirtualized kernel on bare hardware Oct 2 23:58:07.580900 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 23:58:07.580905 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Oct 2 23:58:07.580910 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144 Oct 2 23:58:07.580915 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152 Oct 2 23:58:07.580920 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Oct 2 23:58:07.580926 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Oct 2 23:58:07.580931 kernel: Policy zone: Normal Oct 2 23:58:07.580937 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 23:58:07.580942 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 23:58:07.580947 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Oct 2 23:58:07.580953 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Oct 2 23:58:07.580958 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 23:58:07.580963 kernel: Memory: 32724724K/33452980K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 728000K reserved, 0K cma-reserved) Oct 2 23:58:07.580969 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Oct 2 23:58:07.580975 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 23:58:07.580980 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 23:58:07.580985 kernel: rcu: Hierarchical RCU implementation. Oct 2 23:58:07.580991 kernel: rcu: RCU event tracing is enabled. Oct 2 23:58:07.580996 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Oct 2 23:58:07.581001 kernel: Rude variant of Tasks RCU enabled. Oct 2 23:58:07.581006 kernel: Tracing variant of Tasks RCU enabled. Oct 2 23:58:07.581012 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 2 23:58:07.581018 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Oct 2 23:58:07.581023 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Oct 2 23:58:07.581028 kernel: random: crng init done Oct 2 23:58:07.581033 kernel: Console: colour dummy device 80x25 Oct 2 23:58:07.581038 kernel: printk: console [tty0] enabled Oct 2 23:58:07.581043 kernel: printk: console [ttyS1] enabled Oct 2 23:58:07.581048 kernel: ACPI: Core revision 20210730 Oct 2 23:58:07.581054 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Oct 2 23:58:07.581059 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 23:58:07.581065 kernel: DMAR: Host address width 39 Oct 2 23:58:07.581070 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Oct 2 23:58:07.581075 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Oct 2 23:58:07.581080 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Oct 2 23:58:07.581085 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Oct 2 23:58:07.581090 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Oct 2 23:58:07.581096 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Oct 2 23:58:07.581101 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Oct 2 23:58:07.581106 kernel: x2apic enabled Oct 2 23:58:07.581112 kernel: Switched APIC routing to cluster x2apic. Oct 2 23:58:07.581117 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Oct 2 23:58:07.581122 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Oct 2 23:58:07.581128 kernel: CPU0: Thermal monitoring enabled (TM1) Oct 2 23:58:07.581133 kernel: process: using mwait in idle threads Oct 2 23:58:07.581138 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Oct 2 23:58:07.581143 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Oct 2 23:58:07.581148 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 23:58:07.581153 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Oct 2 23:58:07.581159 kernel: Spectre V2 : Mitigation: Enhanced IBRS Oct 2 23:58:07.581164 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 23:58:07.581169 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Oct 2 23:58:07.581174 kernel: RETBleed: Mitigation: Enhanced IBRS Oct 2 23:58:07.581179 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 2 23:58:07.581184 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Oct 2 23:58:07.581189 kernel: TAA: Mitigation: TSX disabled Oct 2 23:58:07.581194 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Oct 2 23:58:07.581200 kernel: SRBDS: Mitigation: Microcode Oct 2 23:58:07.581205 kernel: GDS: Vulnerable: No microcode Oct 2 23:58:07.581210 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 2 23:58:07.581216 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 2 23:58:07.581221 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 2 23:58:07.581226 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Oct 2 23:58:07.581231 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Oct 2 23:58:07.581236 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 2 23:58:07.581242 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Oct 2 23:58:07.581247 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Oct 2 23:58:07.581252 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Oct 2 23:58:07.581257 kernel: Freeing SMP alternatives memory: 32K Oct 2 23:58:07.581262 kernel: pid_max: default: 32768 minimum: 301 Oct 2 23:58:07.581267 kernel: LSM: Security Framework initializing Oct 2 23:58:07.581272 kernel: SELinux: Initializing. Oct 2 23:58:07.581278 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 23:58:07.581283 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 23:58:07.581288 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Oct 2 23:58:07.581293 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Oct 2 23:58:07.581298 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Oct 2 23:58:07.581303 kernel: ... version: 4 Oct 2 23:58:07.581309 kernel: ... bit width: 48 Oct 2 23:58:07.581314 kernel: ... generic registers: 4 Oct 2 23:58:07.581319 kernel: ... value mask: 0000ffffffffffff Oct 2 23:58:07.581324 kernel: ... max period: 00007fffffffffff Oct 2 23:58:07.581330 kernel: ... fixed-purpose events: 3 Oct 2 23:58:07.581335 kernel: ... event mask: 000000070000000f Oct 2 23:58:07.581340 kernel: signal: max sigframe size: 2032 Oct 2 23:58:07.581345 kernel: rcu: Hierarchical SRCU implementation. Oct 2 23:58:07.581350 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Oct 2 23:58:07.581355 kernel: smp: Bringing up secondary CPUs ... Oct 2 23:58:07.581361 kernel: x86: Booting SMP configuration: Oct 2 23:58:07.581366 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 Oct 2 23:58:07.581373 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Oct 2 23:58:07.581379 kernel: #9 #10 #11 #12 #13 #14 #15 Oct 2 23:58:07.581384 kernel: smp: Brought up 1 node, 16 CPUs Oct 2 23:58:07.581389 kernel: smpboot: Max logical packages: 1 Oct 2 23:58:07.581394 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Oct 2 23:58:07.581399 kernel: devtmpfs: initialized Oct 2 23:58:07.581404 kernel: x86/mm: Memory block size: 128MB Oct 2 23:58:07.581410 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x820e2000-0x820e2fff] (4096 bytes) Oct 2 23:58:07.581415 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Oct 2 23:58:07.581421 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 23:58:07.581426 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Oct 2 23:58:07.581432 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 23:58:07.581437 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 23:58:07.581442 kernel: audit: initializing netlink subsys (disabled) Oct 2 23:58:07.581447 kernel: audit: type=2000 audit(1696291081.040:1): state=initialized audit_enabled=0 res=1 Oct 2 23:58:07.581452 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 23:58:07.581457 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 23:58:07.581462 kernel: cpuidle: using governor menu Oct 2 23:58:07.581469 kernel: ACPI: bus type PCI registered Oct 2 23:58:07.581474 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 23:58:07.581479 kernel: dca service started, version 1.12.1 Oct 2 23:58:07.581484 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Oct 2 23:58:07.581489 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820 Oct 2 23:58:07.581494 kernel: PCI: Using configuration type 1 for base access Oct 2 23:58:07.581500 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Oct 2 23:58:07.581505 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 2 23:58:07.581510 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 23:58:07.581516 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 23:58:07.581521 kernel: ACPI: Added _OSI(Module Device) Oct 2 23:58:07.581526 kernel: ACPI: Added _OSI(Processor Device) Oct 2 23:58:07.581531 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 23:58:07.581536 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 23:58:07.581541 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 23:58:07.581546 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 23:58:07.581552 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 23:58:07.581557 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Oct 2 23:58:07.581563 kernel: ACPI: Dynamic OEM Table Load: Oct 2 23:58:07.581568 kernel: ACPI: SSDT 0xFFFF9C1D4020D100 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Oct 2 23:58:07.581573 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked Oct 2 23:58:07.581578 kernel: ACPI: Dynamic OEM Table Load: Oct 2 23:58:07.581583 kernel: ACPI: SSDT 0xFFFF9C1D41AE2800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Oct 2 23:58:07.581588 kernel: ACPI: Dynamic OEM Table Load: Oct 2 23:58:07.581594 kernel: ACPI: SSDT 0xFFFF9C1D41A53800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Oct 2 23:58:07.581599 kernel: ACPI: Dynamic OEM Table Load: Oct 2 23:58:07.581604 kernel: ACPI: SSDT 0xFFFF9C1D41A55800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Oct 2 23:58:07.581609 kernel: ACPI: Dynamic OEM Table Load: Oct 2 23:58:07.581614 kernel: ACPI: SSDT 0xFFFF9C1D4014D000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Oct 2 23:58:07.581620 kernel: ACPI: Dynamic OEM Table Load: Oct 2 23:58:07.581625 kernel: ACPI: SSDT 0xFFFF9C1D41AE6800 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Oct 2 23:58:07.581630 kernel: ACPI: Interpreter enabled Oct 2 23:58:07.581635 kernel: ACPI: PM: (supports S0 S5) Oct 2 23:58:07.581640 kernel: ACPI: Using IOAPIC for interrupt routing Oct 2 23:58:07.581645 kernel: HEST: Enabling Firmware First mode for corrected errors. Oct 2 23:58:07.581651 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Oct 2 23:58:07.581656 kernel: HEST: Table parsing has been initialized. Oct 2 23:58:07.581662 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Oct 2 23:58:07.581667 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 2 23:58:07.581672 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Oct 2 23:58:07.581677 kernel: ACPI: PM: Power Resource [USBC] Oct 2 23:58:07.581682 kernel: ACPI: PM: Power Resource [V0PR] Oct 2 23:58:07.581687 kernel: ACPI: PM: Power Resource [V1PR] Oct 2 23:58:07.581692 kernel: ACPI: PM: Power Resource [V2PR] Oct 2 23:58:07.581697 kernel: ACPI: PM: Power Resource [WRST] Oct 2 23:58:07.581702 kernel: ACPI: PM: Power Resource [FN00] Oct 2 23:58:07.581708 kernel: ACPI: PM: Power Resource [FN01] Oct 2 23:58:07.581713 kernel: ACPI: PM: Power Resource [FN02] Oct 2 23:58:07.581718 kernel: ACPI: PM: Power Resource [FN03] Oct 2 23:58:07.581723 kernel: ACPI: PM: Power Resource [FN04] Oct 2 23:58:07.581728 kernel: ACPI: PM: Power Resource [PIN] Oct 2 23:58:07.581734 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Oct 2 23:58:07.581797 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 23:58:07.581841 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Oct 2 23:58:07.581885 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Oct 2 23:58:07.581892 kernel: PCI host bridge to bus 0000:00 Oct 2 23:58:07.581935 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 2 23:58:07.581972 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 2 23:58:07.582009 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 2 23:58:07.582045 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Oct 2 23:58:07.582081 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Oct 2 23:58:07.582118 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Oct 2 23:58:07.582169 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Oct 2 23:58:07.582218 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Oct 2 23:58:07.582260 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Oct 2 23:58:07.582307 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Oct 2 23:58:07.582348 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Oct 2 23:58:07.582398 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Oct 2 23:58:07.582440 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Oct 2 23:58:07.582488 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Oct 2 23:58:07.582530 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Oct 2 23:58:07.582571 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Oct 2 23:58:07.582615 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Oct 2 23:58:07.582658 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Oct 2 23:58:07.582699 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Oct 2 23:58:07.582743 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Oct 2 23:58:07.582785 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Oct 2 23:58:07.582832 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Oct 2 23:58:07.582874 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Oct 2 23:58:07.582920 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Oct 2 23:58:07.582961 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Oct 2 23:58:07.583002 
kernel: pci 0000:00:16.0: PME# supported from D3hot Oct 2 23:58:07.583045 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Oct 2 23:58:07.583087 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Oct 2 23:58:07.583127 kernel: pci 0000:00:16.1: PME# supported from D3hot Oct 2 23:58:07.583171 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Oct 2 23:58:07.583213 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Oct 2 23:58:07.583253 kernel: pci 0000:00:16.4: PME# supported from D3hot Oct 2 23:58:07.583297 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Oct 2 23:58:07.583339 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Oct 2 23:58:07.583382 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Oct 2 23:58:07.583462 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Oct 2 23:58:07.583503 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Oct 2 23:58:07.583550 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Oct 2 23:58:07.583592 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Oct 2 23:58:07.583632 kernel: pci 0000:00:17.0: PME# supported from D3hot Oct 2 23:58:07.583676 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Oct 2 23:58:07.583719 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Oct 2 23:58:07.583763 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Oct 2 23:58:07.583805 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Oct 2 23:58:07.583854 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Oct 2 23:58:07.583896 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Oct 2 23:58:07.583941 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Oct 2 23:58:07.583983 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Oct 2 23:58:07.584029 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Oct 2 23:58:07.584071 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Oct 2 23:58:07.584118 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Oct 2 23:58:07.584159 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Oct 2 23:58:07.584206 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Oct 2 23:58:07.584253 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Oct 2 23:58:07.584296 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Oct 2 23:58:07.584338 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Oct 2 23:58:07.584386 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Oct 2 23:58:07.584429 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Oct 2 23:58:07.584475 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Oct 2 23:58:07.584522 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Oct 2 23:58:07.584564 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Oct 2 23:58:07.584607 kernel: pci 0000:01:00.0: PME# supported from D3cold Oct 2 23:58:07.584649 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Oct 2 23:58:07.584692 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Oct 2 23:58:07.584738 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Oct 2 23:58:07.584781 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Oct 2 23:58:07.584825 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff 
pref] Oct 2 23:58:07.584868 kernel: pci 0000:01:00.1: PME# supported from D3cold Oct 2 23:58:07.584911 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Oct 2 23:58:07.584953 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Oct 2 23:58:07.584997 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 2 23:58:07.585039 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Oct 2 23:58:07.585081 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Oct 2 23:58:07.585121 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Oct 2 23:58:07.585170 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Oct 2 23:58:07.585212 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Oct 2 23:58:07.585256 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Oct 2 23:58:07.585298 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Oct 2 23:58:07.585341 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Oct 2 23:58:07.585384 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Oct 2 23:58:07.585427 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Oct 2 23:58:07.585472 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Oct 2 23:58:07.585519 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Oct 2 23:58:07.585563 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Oct 2 23:58:07.585605 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Oct 2 23:58:07.585648 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Oct 2 23:58:07.585690 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Oct 2 23:58:07.585732 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Oct 2 23:58:07.585773 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Oct 2 23:58:07.585816 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Oct 2 23:58:07.585858 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Oct 2 23:58:07.585904 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Oct 2 23:58:07.585948 kernel: pci 0000:06:00.0: enabling Extended Tags Oct 2 23:58:07.585991 kernel: pci 0000:06:00.0: supports D1 D2 Oct 2 23:58:07.586035 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 2 23:58:07.586076 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Oct 2 23:58:07.586119 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Oct 2 23:58:07.586162 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Oct 2 23:58:07.586244 kernel: pci_bus 0000:07: extended config space not accessible Oct 2 23:58:07.586314 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Oct 2 23:58:07.586361 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Oct 2 23:58:07.586408 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Oct 2 23:58:07.586453 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Oct 2 23:58:07.586497 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 2 23:58:07.586544 kernel: pci 0000:07:00.0: supports D1 D2 Oct 2 23:58:07.586590 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 2 23:58:07.586634 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Oct 2 23:58:07.586676 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Oct 2 23:58:07.586719 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Oct 2 23:58:07.586727 kernel: ACPI: PCI: Interrupt 
link LNKA configured for IRQ 0 Oct 2 23:58:07.586733 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Oct 2 23:58:07.586739 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Oct 2 23:58:07.586745 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Oct 2 23:58:07.586751 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Oct 2 23:58:07.586756 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Oct 2 23:58:07.586762 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Oct 2 23:58:07.586767 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Oct 2 23:58:07.586773 kernel: iommu: Default domain type: Translated Oct 2 23:58:07.586778 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 2 23:58:07.586822 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Oct 2 23:58:07.586868 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 2 23:58:07.586913 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Oct 2 23:58:07.586920 kernel: vgaarb: loaded Oct 2 23:58:07.586926 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 23:58:07.586932 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 23:58:07.586937 kernel: PTP clock support registered Oct 2 23:58:07.586943 kernel: PCI: Using ACPI for IRQ routing Oct 2 23:58:07.586949 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 2 23:58:07.586954 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Oct 2 23:58:07.586961 kernel: e820: reserve RAM buffer [mem 0x820e2000-0x83ffffff] Oct 2 23:58:07.586966 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Oct 2 23:58:07.586971 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Oct 2 23:58:07.586977 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Oct 2 23:58:07.586982 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Oct 2 23:58:07.586987 kernel: clocksource: Switched to clocksource tsc-early Oct 2 23:58:07.586993 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 23:58:07.586998 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 23:58:07.587004 kernel: pnp: PnP ACPI init Oct 2 23:58:07.587047 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Oct 2 23:58:07.587089 kernel: pnp 00:02: [dma 0 disabled] Oct 2 23:58:07.587129 kernel: pnp 00:03: [dma 0 disabled] Oct 2 23:58:07.587172 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Oct 2 23:58:07.587210 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Oct 2 23:58:07.587251 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Oct 2 23:58:07.587292 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Oct 2 23:58:07.587330 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Oct 2 23:58:07.587368 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Oct 2 23:58:07.587451 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Oct 2 23:58:07.587490 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Oct 2 23:58:07.587526 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Oct 2 23:58:07.587564 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Oct 2 23:58:07.587602 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Oct 2 23:58:07.587641 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Oct 2 23:58:07.587678 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has 
been reserved Oct 2 23:58:07.587715 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Oct 2 23:58:07.587752 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Oct 2 23:58:07.587788 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Oct 2 23:58:07.587826 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Oct 2 23:58:07.587864 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Oct 2 23:58:07.587905 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Oct 2 23:58:07.587913 kernel: pnp: PnP ACPI: found 10 devices Oct 2 23:58:07.587919 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 2 23:58:07.587924 kernel: NET: Registered PF_INET protocol family Oct 2 23:58:07.587930 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 23:58:07.587936 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Oct 2 23:58:07.587941 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 23:58:07.587948 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 23:58:07.587954 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Oct 2 23:58:07.587960 kernel: TCP: Hash tables configured (established 262144 bind 65536) Oct 2 23:58:07.587965 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 2 23:58:07.587971 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 2 23:58:07.587977 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 23:58:07.587982 kernel: NET: Registered PF_XDP protocol family Oct 2 23:58:07.588024 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Oct 2 23:58:07.588068 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Oct 2 23:58:07.588109 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Oct 2 23:58:07.588152 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Oct 2 23:58:07.588196 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Oct 2 23:58:07.588239 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Oct 2 23:58:07.588282 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Oct 2 23:58:07.588323 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 2 23:58:07.588365 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Oct 2 23:58:07.588459 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Oct 2 23:58:07.588502 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Oct 2 23:58:07.588543 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Oct 2 23:58:07.588584 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Oct 2 23:58:07.588626 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Oct 2 23:58:07.588670 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Oct 2 23:58:07.588711 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Oct 2 23:58:07.588752 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Oct 2 23:58:07.588794 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Oct 2 23:58:07.588837 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Oct 2 23:58:07.588880 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Oct 2 23:58:07.588922 kernel: pci 0000:06:00.0: bridge 
window [mem 0x94000000-0x950fffff] Oct 2 23:58:07.588963 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Oct 2 23:58:07.589005 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Oct 2 23:58:07.589048 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Oct 2 23:58:07.589086 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Oct 2 23:58:07.589122 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 2 23:58:07.589159 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 2 23:58:07.589195 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 2 23:58:07.589230 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Oct 2 23:58:07.589267 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Oct 2 23:58:07.589310 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Oct 2 23:58:07.589351 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Oct 2 23:58:07.589425 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Oct 2 23:58:07.589485 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Oct 2 23:58:07.589527 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Oct 2 23:58:07.589565 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Oct 2 23:58:07.589607 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Oct 2 23:58:07.589646 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Oct 2 23:58:07.589687 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Oct 2 23:58:07.589727 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Oct 2 23:58:07.589735 kernel: PCI: CLS 64 bytes, default 64 Oct 2 23:58:07.589741 kernel: DMAR: No ATSR found Oct 2 23:58:07.589746 kernel: DMAR: No SATC found Oct 2 23:58:07.589752 kernel: DMAR: dmar0: Using Queued invalidation Oct 2 23:58:07.589794 kernel: pci 0000:00:00.0: Adding to iommu group 0 Oct 2 23:58:07.589838 kernel: pci 0000:00:01.0: Adding to iommu group 1 Oct 2 23:58:07.589881 kernel: pci 0000:00:08.0: Adding to iommu group 2 Oct 2 23:58:07.589922 kernel: pci 0000:00:12.0: Adding to iommu group 3 Oct 2 23:58:07.589964 kernel: pci 0000:00:14.0: Adding to iommu group 4 Oct 2 23:58:07.590005 kernel: pci 0000:00:14.2: Adding to iommu group 4 Oct 2 23:58:07.590047 kernel: pci 0000:00:15.0: Adding to iommu group 5 Oct 2 23:58:07.590088 kernel: pci 0000:00:15.1: Adding to iommu group 5 Oct 2 23:58:07.590129 kernel: pci 0000:00:16.0: Adding to iommu group 6 Oct 2 23:58:07.590173 kernel: pci 0000:00:16.1: Adding to iommu group 6 Oct 2 23:58:07.590214 kernel: pci 0000:00:16.4: Adding to iommu group 6 Oct 2 23:58:07.590254 kernel: pci 0000:00:17.0: Adding to iommu group 7 Oct 2 23:58:07.590297 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Oct 2 23:58:07.590337 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Oct 2 23:58:07.590382 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Oct 2 23:58:07.590471 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Oct 2 23:58:07.590513 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Oct 2 23:58:07.590556 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Oct 2 23:58:07.590598 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Oct 2 23:58:07.590640 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Oct 2 23:58:07.590681 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Oct 2 23:58:07.590725 kernel: pci 0000:01:00.0: Adding to iommu group 1 Oct 2 23:58:07.590767 kernel: pci 0000:01:00.1: Adding 
to iommu group 1 Oct 2 23:58:07.590810 kernel: pci 0000:03:00.0: Adding to iommu group 15 Oct 2 23:58:07.590852 kernel: pci 0000:04:00.0: Adding to iommu group 16 Oct 2 23:58:07.590898 kernel: pci 0000:06:00.0: Adding to iommu group 17 Oct 2 23:58:07.590943 kernel: pci 0000:07:00.0: Adding to iommu group 17 Oct 2 23:58:07.590950 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Oct 2 23:58:07.590956 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Oct 2 23:58:07.590962 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Oct 2 23:58:07.590967 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Oct 2 23:58:07.590973 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Oct 2 23:58:07.590978 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Oct 2 23:58:07.590985 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Oct 2 23:58:07.591031 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Oct 2 23:58:07.591039 kernel: Initialise system trusted keyrings Oct 2 23:58:07.591044 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Oct 2 23:58:07.591050 kernel: Key type asymmetric registered Oct 2 23:58:07.591055 kernel: Asymmetric key parser 'x509' registered Oct 2 23:58:07.591061 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 23:58:07.591066 kernel: io scheduler mq-deadline registered Oct 2 23:58:07.591073 kernel: io scheduler kyber registered Oct 2 23:58:07.591079 kernel: io scheduler bfq registered Oct 2 23:58:07.591120 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Oct 2 23:58:07.591162 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Oct 2 23:58:07.591204 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Oct 2 23:58:07.591245 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Oct 2 23:58:07.591288 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Oct 2 23:58:07.591328 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Oct 2 23:58:07.591379 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Oct 2 23:58:07.591415 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Oct 2 23:58:07.591420 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Oct 2 23:58:07.591426 kernel: pstore: Registered erst as persistent store backend Oct 2 23:58:07.591432 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 2 23:58:07.591457 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 23:58:07.591463 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 2 23:58:07.591468 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Oct 2 23:58:07.591475 kernel: hpet_acpi_add: no address or irqs in _CRS Oct 2 23:58:07.591519 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Oct 2 23:58:07.591527 kernel: i8042: PNP: No PS/2 controller found. 
Oct 2 23:58:07.591565 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Oct 2 23:58:07.591604 kernel: rtc_cmos rtc_cmos: registered as rtc0 Oct 2 23:58:07.591641 kernel: rtc_cmos rtc_cmos: setting system clock to 2023-10-02T23:58:06 UTC (1696291086) Oct 2 23:58:07.591678 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Oct 2 23:58:07.591686 kernel: fail to initialize ptp_kvm Oct 2 23:58:07.591693 kernel: intel_pstate: Intel P-state driver initializing Oct 2 23:58:07.591698 kernel: intel_pstate: Disabling energy efficiency optimization Oct 2 23:58:07.591704 kernel: intel_pstate: HWP enabled Oct 2 23:58:07.591709 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Oct 2 23:58:07.591715 kernel: vesafb: scrolling: redraw Oct 2 23:58:07.591720 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Oct 2 23:58:07.591726 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000005febd274, using 768k, total 768k Oct 2 23:58:07.591732 kernel: Console: switching to colour frame buffer device 128x48 Oct 2 23:58:07.591737 kernel: fb0: VESA VGA frame buffer device Oct 2 23:58:07.591743 kernel: NET: Registered PF_INET6 protocol family Oct 2 23:58:07.591749 kernel: Segment Routing with IPv6 Oct 2 23:58:07.591754 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 23:58:07.591760 kernel: NET: Registered PF_PACKET protocol family Oct 2 23:58:07.591766 kernel: Key type dns_resolver registered Oct 2 23:58:07.591771 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Oct 2 23:58:07.591777 kernel: microcode: Microcode Update Driver: v2.2. Oct 2 23:58:07.591782 kernel: IPI shorthand broadcast: enabled Oct 2 23:58:07.591788 kernel: sched_clock: Marking stable (1678743141, 1334713226)->(4432404275, -1418947908) Oct 2 23:58:07.591794 kernel: registered taskstats version 1 Oct 2 23:58:07.591800 kernel: Loading compiled-in X.509 certificates Oct 2 23:58:07.591805 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861' Oct 2 23:58:07.591810 kernel: Key type .fscrypt registered Oct 2 23:58:07.591816 kernel: Key type fscrypt-provisioning registered Oct 2 23:58:07.591821 kernel: pstore: Using crash dump compression: deflate Oct 2 23:58:07.591827 kernel: ima: Allocated hash algorithm: sha1 Oct 2 23:58:07.591832 kernel: ima: No architecture policies found Oct 2 23:58:07.591838 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 23:58:07.591844 kernel: Write protecting the kernel read-only data: 28672k Oct 2 23:58:07.591850 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 23:58:07.591856 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K Oct 2 23:58:07.591861 kernel: Run /init as init process Oct 2 23:58:07.591867 kernel: with arguments: Oct 2 23:58:07.591873 kernel: /init Oct 2 23:58:07.591878 kernel: with environment: Oct 2 23:58:07.591883 kernel: HOME=/ Oct 2 23:58:07.591889 kernel: TERM=linux Oct 2 23:58:07.591895 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 23:58:07.591901 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 23:58:07.591908 systemd[1]: Detected architecture x86-64. Oct 2 23:58:07.591914 systemd[1]: Running in initrd. 
Oct 2 23:58:07.591920 systemd[1]: No hostname configured, using default hostname. Oct 2 23:58:07.591925 systemd[1]: Hostname set to . Oct 2 23:58:07.591931 systemd[1]: Initializing machine ID from random generator. Oct 2 23:58:07.591938 systemd[1]: Queued start job for default target initrd.target. Oct 2 23:58:07.591943 systemd[1]: Started systemd-ask-password-console.path. Oct 2 23:58:07.591949 systemd[1]: Reached target cryptsetup.target. Oct 2 23:58:07.591955 systemd[1]: Reached target ignition-diskful-subsequent.target. Oct 2 23:58:07.591960 systemd[1]: Reached target paths.target. Oct 2 23:58:07.591966 systemd[1]: Reached target slices.target. Oct 2 23:58:07.591972 systemd[1]: Reached target swap.target. Oct 2 23:58:07.591977 systemd[1]: Reached target timers.target. Oct 2 23:58:07.591984 systemd[1]: Listening on iscsid.socket. Oct 2 23:58:07.591990 systemd[1]: Listening on iscsiuio.socket. Oct 2 23:58:07.591996 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 23:58:07.592002 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 23:58:07.592008 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Oct 2 23:58:07.592013 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Oct 2 23:58:07.592019 systemd[1]: Listening on systemd-journald.socket. Oct 2 23:58:07.592025 kernel: clocksource: Switched to clocksource tsc Oct 2 23:58:07.592032 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 23:58:07.592037 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 23:58:07.592043 systemd[1]: Reached target sockets.target. Oct 2 23:58:07.592049 systemd[1]: Starting iscsiuio.service... Oct 2 23:58:07.592055 systemd[1]: Starting kmod-static-nodes.service... Oct 2 23:58:07.592060 kernel: SCSI subsystem initialized Oct 2 23:58:07.592066 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 23:58:07.592072 kernel: Loading iSCSI transport class v2.0-870. Oct 2 23:58:07.592077 systemd[1]: Starting systemd-journald.service... Oct 2 23:58:07.592084 systemd[1]: Starting systemd-modules-load.service... Oct 2 23:58:07.592092 systemd-journald[266]: Journal started Oct 2 23:58:07.592118 systemd-journald[266]: Runtime Journal (/run/log/journal/1f5fd3ac1a144366878a301c3112574d) is 8.0M, max 640.1M, 632.1M free. Oct 2 23:58:07.595276 systemd-modules-load[267]: Inserted module 'overlay' Oct 2 23:58:07.618962 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 23:58:07.652373 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 23:58:07.652388 systemd[1]: Started iscsiuio.service. Oct 2 23:58:07.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:07.678426 kernel: Bridge firewalling registered Oct 2 23:58:07.678440 kernel: audit: type=1130 audit(1696291087.677:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:07.678448 systemd[1]: Started systemd-journald.service. 
Oct 2 23:58:07.738049 systemd-modules-load[267]: Inserted module 'br_netfilter' Oct 2 23:58:07.781640 kernel: audit: type=1130 audit(1696291087.737:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:07.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:07.738296 systemd[1]: Finished kmod-static-nodes.service. Oct 2 23:58:07.892466 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 23:58:07.892478 kernel: audit: type=1130 audit(1696291087.801:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:07.892486 kernel: device-mapper: uevent: version 1.0.3 Oct 2 23:58:07.892492 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 23:58:07.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:07.801525 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 23:58:07.942600 kernel: audit: type=1130 audit(1696291087.897:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:07.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:07.897477 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 23:58:07.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:07.996488 kernel: audit: type=1130 audit(1696291087.950:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:07.942969 systemd-modules-load[267]: Inserted module 'dm_multipath' Oct 2 23:58:08.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:08.051423 kernel: audit: type=1130 audit(1696291088.004:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:07.950690 systemd[1]: Finished systemd-modules-load.service. Oct 2 23:58:08.004971 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 23:58:08.051628 systemd[1]: Starting systemd-sysctl.service... Oct 2 23:58:08.051920 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 23:58:08.054684 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Oct 2 23:58:08.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:08.055157 systemd[1]: Finished systemd-sysctl.service. Oct 2 23:58:08.104459 kernel: audit: type=1130 audit(1696291088.054:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:08.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:08.117722 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 23:58:08.217862 kernel: audit: type=1130 audit(1696291088.117:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:08.217879 kernel: audit: type=1130 audit(1696291088.165:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:08.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:08.165862 systemd[1]: Starting dracut-cmdline.service... Oct 2 23:58:08.249476 kernel: iscsi: registered transport (tcp) Oct 2 23:58:08.249487 dracut-cmdline[287]: dracut-dracut-053 Oct 2 23:58:08.249487 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Oct 2 23:58:08.249487 dracut-cmdline[287]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 23:58:08.321712 kernel: iscsi: registered transport (qla4xxx) Oct 2 23:58:08.321724 kernel: QLogic iSCSI HBA Driver Oct 2 23:58:08.310113 systemd[1]: Finished dracut-cmdline.service. Oct 2 23:58:08.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:08.348273 systemd[1]: Starting dracut-pre-udev.service... Oct 2 23:58:08.361987 systemd[1]: Starting iscsid.service... Oct 2 23:58:08.375682 systemd[1]: Started iscsid.service. Oct 2 23:58:08.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:08.396657 iscsid[441]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 23:58:08.396657 iscsid[441]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Oct 2 23:58:08.396657 iscsid[441]: into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Oct 2 23:58:08.396657 iscsid[441]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 23:58:08.396657 iscsid[441]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 23:58:08.396657 iscsid[441]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 23:58:08.396657 iscsid[441]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 23:58:08.554474 kernel: raid6: avx2x4 gen() 20479 MB/s Oct 2 23:58:08.554488 kernel: raid6: avx2x4 xor() 20872 MB/s Oct 2 23:58:08.554496 kernel: raid6: avx2x2 gen() 54065 MB/s Oct 2 23:58:08.554502 kernel: raid6: avx2x2 xor() 32250 MB/s Oct 2 23:58:08.554509 kernel: raid6: avx2x1 gen() 45166 MB/s Oct 2 23:58:08.597406 kernel: raid6: avx2x1 xor() 27814 MB/s Oct 2 23:58:08.632402 kernel: raid6: sse2x4 gen() 21379 MB/s Oct 2 23:58:08.667402 kernel: raid6: sse2x4 xor() 11994 MB/s Oct 2 23:58:08.702436 kernel: raid6: sse2x2 gen() 21684 MB/s Oct 2 23:58:08.737437 kernel: raid6: sse2x2 xor() 13469 MB/s Oct 2 23:58:08.770406 kernel: raid6: sse2x1 gen() 18307 MB/s Oct 2 23:58:08.823211 kernel: raid6: sse2x1 xor() 8934 MB/s Oct 2 23:58:08.823227 kernel: raid6: using algorithm avx2x2 gen() 54065 MB/s Oct 2 23:58:08.823234 kernel: raid6: .... xor() 32250 MB/s, rmw enabled Oct 2 23:58:08.841688 kernel: raid6: using avx2x2 recovery algorithm Oct 2 23:58:08.888402 kernel: xor: automatically using best checksumming function avx Oct 2 23:58:08.967402 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 23:58:08.972204 systemd[1]: Finished dracut-pre-udev.service. Oct 2 23:58:08.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:08.981000 audit: BPF prog-id=6 op=LOAD Oct 2 23:58:08.981000 audit: BPF prog-id=7 op=LOAD Oct 2 23:58:08.982380 systemd[1]: Starting systemd-udevd.service... Oct 2 23:58:08.990975 systemd-udevd[466]: Using default interface naming scheme 'v252'. Oct 2 23:58:09.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:08.996673 systemd[1]: Started systemd-udevd.service. Oct 2 23:58:09.035486 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation Oct 2 23:58:09.013343 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 23:58:09.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:09.039521 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 23:58:09.052116 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 23:58:09.099196 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 23:58:09.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:09.099724 systemd[1]: Starting dracut-initqueue.service.
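The iscsid warning above is self-describing: this initrd has no /etc/iscsi/initiatorname.iscsi, so software-iSCSI discovery and logins could fail. A minimal sketch of how such a file might be generated before iscsid starts is shown below; the IQN date, domain, and identifier are illustrative placeholders, not values taken from this host.

from pathlib import Path

# Hypothetical helper (not part of the boot image): writes the InitiatorName file
# the iscsid warning above asks for. The default IQN is an example only, following
# the documented format iqn.yyyy-mm.<reversed domain name>[:identifier].
def write_initiator_name(iqn: str = "iqn.2023-10.org.example:initiator01",
                         path: str = "/etc/iscsi/initiatorname.iscsi") -> None:
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)  # /etc/iscsi may not exist yet
    p.write_text(f"InitiatorName={iqn}\n")       # iscsid reads exactly this one key

if __name__ == "__main__":
    write_initiator_name()  # needs root; restart iscsid so it picks up the name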
Oct 2 23:58:09.128375 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 23:58:09.148377 kernel: ACPI: bus type USB registered Oct 2 23:58:09.148409 kernel: libata version 3.00 loaded. Oct 2 23:58:09.148417 kernel: usbcore: registered new interface driver usbfs Oct 2 23:58:09.185096 kernel: usbcore: registered new interface driver hub Oct 2 23:58:09.185127 kernel: usbcore: registered new device driver usb Oct 2 23:58:09.220382 kernel: AVX2 version of gcm_enc/dec engaged. Oct 2 23:58:09.220425 kernel: AES CTR mode by8 optimization enabled Oct 2 23:58:09.273376 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Oct 2 23:58:09.273400 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Oct 2 23:58:09.312023 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Oct 2 23:58:09.312119 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Oct 2 23:58:09.313424 kernel: ahci 0000:00:17.0: version 3.0 Oct 2 23:58:09.329423 kernel: pps pps0: new PPS source ptp0 Oct 2 23:58:09.329498 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Oct 2 23:58:09.329553 kernel: igb 0000:03:00.0: added PHC on eth0 Oct 2 23:58:09.329607 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Oct 2 23:58:09.329656 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Oct 2 23:58:09.330424 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Oct 2 23:58:09.331424 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Oct 2 23:58:09.331493 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Oct 2 23:58:09.331554 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Oct 2 23:58:09.331603 kernel: hub 1-0:1.0: USB hub found Oct 2 23:58:09.331664 kernel: hub 1-0:1.0: 16 ports detected Oct 2 23:58:09.331716 kernel: hub 2-0:1.0: USB hub found Oct 2 23:58:09.331791 kernel: hub 2-0:1.0: 10 ports detected Oct 2 23:58:09.332417 kernel: usb: port power management may be unreliable Oct 2 23:58:09.348259 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Oct 2 23:58:09.379590 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Oct 2 23:58:09.432422 kernel: scsi host0: ahci Oct 2 23:58:09.432496 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:56 Oct 2 23:58:09.432551 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Oct 2 23:58:09.449623 kernel: scsi host1: ahci Oct 2 23:58:09.449649 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Oct 2 23:58:09.504043 kernel: pps pps1: new PPS source ptp1 Oct 2 23:58:09.504113 kernel: scsi host2: ahci Oct 2 23:58:09.504127 kernel: igb 0000:04:00.0: added PHC on eth1 Oct 2 23:58:09.529188 kernel: scsi host3: ahci Oct 2 23:58:09.545987 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Oct 2 23:58:09.560336 kernel: scsi host4: ahci Oct 2 23:58:09.560358 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:57 Oct 2 23:58:09.572372 kernel: scsi host5: ahci Oct 2 23:58:09.572395 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Oct 2 23:58:09.582443 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Oct 2 23:58:09.588413 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Oct 2 23:58:09.588486 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Oct 2 23:58:09.608064 kernel: scsi host6: ahci Oct 2 23:58:09.608090 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Oct 2 23:58:09.623419 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 132 Oct 2 23:58:09.669665 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Oct 2 23:58:09.669736 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 132 Oct 2 23:58:09.669744 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 132 Oct 2 23:58:09.669752 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 132 Oct 2 23:58:09.723849 kernel: hub 1-14:1.0: USB hub found Oct 2 23:58:09.723927 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 132 Oct 2 23:58:09.742339 kernel: hub 1-14:1.0: 4 ports detected Oct 2 23:58:09.742419 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 132 Oct 2 23:58:09.742428 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 132 Oct 2 23:58:09.791372 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Oct 2 23:58:10.029579 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Oct 2 23:58:10.070456 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Oct 2 23:58:10.070533 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Oct 2 23:58:10.070551 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Oct 2 23:58:10.113381 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 2 23:58:10.113410 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Oct 2 23:58:10.130403 kernel: ata7: SATA link down (SStatus 0 SControl 300) Oct 2 23:58:10.146409 kernel: ata3: SATA link down (SStatus 0 SControl 300) Oct 2 23:58:10.161401 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 2 23:58:10.177402 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 2 23:58:10.177417 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 2 23:58:10.192418 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Oct 2 23:58:10.223430 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Oct 2 23:58:10.241430 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Oct 2 23:58:10.291906 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Oct 2 23:58:10.291921 kernel: ata1.00: Features: NCQ-prio Oct 2 23:58:10.291929 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Oct 2 23:58:10.322362 kernel: ata2.00: Features: 
NCQ-prio Oct 2 23:58:10.341429 kernel: ata1.00: configured for UDMA/133 Oct 2 23:58:10.341446 kernel: ata2.00: configured for UDMA/133 Oct 2 23:58:10.341453 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Oct 2 23:58:10.372371 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Oct 2 23:58:10.372444 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Oct 2 23:58:10.409372 kernel: port_module: 9 callbacks suppressed Oct 2 23:58:10.409391 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Oct 2 23:58:10.462372 kernel: usbcore: registered new interface driver usbhid Oct 2 23:58:10.462417 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Oct 2 23:58:10.462508 kernel: usbhid: USB HID core driver Oct 2 23:58:10.530372 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Oct 2 23:58:10.530387 kernel: ata2.00: Enabling discard_zeroes_data Oct 2 23:58:10.545584 kernel: ata1.00: Enabling discard_zeroes_data Oct 2 23:58:10.560569 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Oct 2 23:58:10.560647 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Oct 2 23:58:10.596138 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Oct 2 23:58:10.596231 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Oct 2 23:58:10.596296 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 2 23:58:10.596363 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Oct 2 23:58:10.596436 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Oct 2 23:58:10.596444 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Oct 2 23:58:10.611346 kernel: sd 1:0:0:0: [sdb] Write Protect is off Oct 2 23:58:10.626356 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Oct 2 23:58:10.640722 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Oct 2 23:58:10.672162 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Oct 2 23:58:10.676375 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Oct 2 23:58:10.706241 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Oct 2 23:58:10.833914 kernel: ata1.00: Enabling discard_zeroes_data Oct 2 23:58:10.850066 kernel: ata2.00: Enabling discard_zeroes_data Oct 2 23:58:10.850123 kernel: ata1.00: Enabling discard_zeroes_data Oct 2 23:58:10.865861 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 2 23:58:10.881372 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Oct 2 23:58:10.912541 kernel: ata2.00: Enabling discard_zeroes_data Oct 2 23:58:10.912556 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Oct 2 23:58:10.947415 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Oct 2 23:58:10.963967 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 23:58:11.022649 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Oct 2 23:58:11.022728 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sdb6 scanned by (udev-worker) (513) Oct 2 23:58:11.001492 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Oct 2 23:58:11.006605 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 23:58:11.033716 systemd[1]: Finished dracut-initqueue.service. Oct 2 23:58:11.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:11.062749 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 23:58:11.118554 kernel: audit: type=1130 audit(1696291091.058:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:11.111942 systemd[1]: Reached target initrd-root-device.target. Oct 2 23:58:11.118613 systemd[1]: Reached target remote-fs-pre.target. Oct 2 23:58:11.142500 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 23:58:11.159594 systemd[1]: Reached target remote-fs.target. Oct 2 23:58:11.178231 systemd[1]: Starting disk-uuid.service... Oct 2 23:58:11.192948 systemd[1]: Starting dracut-pre-mount.service... Oct 2 23:58:11.206994 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 23:58:11.311349 kernel: audit: type=1130 audit(1696291091.222:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:11.311365 kernel: audit: type=1131 audit(1696291091.222:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:11.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:11.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:11.207173 systemd[1]: Finished disk-uuid.service. Oct 2 23:58:11.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:11.366373 kernel: audit: type=1130 audit(1696291091.319:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:11.223097 systemd[1]: Finished dracut-pre-mount.service. Oct 2 23:58:11.319662 systemd[1]: Reached target local-fs-pre.target. Oct 2 23:58:11.374608 systemd[1]: Reached target local-fs.target. Oct 2 23:58:11.374641 systemd[1]: Reached target sysinit.target. Oct 2 23:58:11.398483 systemd[1]: Reached target basic.target. Oct 2 23:58:11.412134 systemd[1]: Starting systemd-fsck-root.service... Oct 2 23:58:11.420072 systemd[1]: Starting verity-setup.service... Oct 2 23:58:11.430932 systemd-fsck[703]: ROOT: clean, 631/553520 files, 110549/553472 blocks Oct 2 23:58:11.455373 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 23:58:11.469637 systemd[1]: Finished systemd-fsck-root.service. Oct 2 23:58:11.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 23:58:11.529387 kernel: audit: type=1130 audit(1696291091.477:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:11.478996 systemd[1]: Mounting sysroot.mount... Oct 2 23:58:11.537854 systemd[1]: Found device dev-mapper-usr.device. Oct 2 23:58:11.552581 systemd[1]: Finished verity-setup.service. Oct 2 23:58:11.669622 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 23:58:11.669639 kernel: audit: type=1130 audit(1696291091.574:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:11.669648 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 23:58:11.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:11.575144 systemd[1]: Mounting sysusr-usr.mount... Oct 2 23:58:11.677081 systemd[1]: Mounted sysroot.mount. Oct 2 23:58:11.690642 systemd[1]: Mounted sysusr-usr.mount. Oct 2 23:58:11.710595 systemd[1]: Reached target initrd-root-fs.target. Oct 2 23:58:11.719321 systemd[1]: Mounting sysroot-usr.mount... Oct 2 23:58:11.735645 systemd[1]: Mounted sysroot-usr.mount. Oct 2 23:58:11.754000 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 23:58:11.764981 systemd[1]: Starting initrd-setup-root.service... Oct 2 23:58:11.872659 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Oct 2 23:58:11.872673 kernel: BTRFS info (device sdb6): using free space tree Oct 2 23:58:11.872681 kernel: BTRFS info (device sdb6): has skinny extents Oct 2 23:58:11.872687 kernel: BTRFS info (device sdb6): enabling ssd optimizations Oct 2 23:58:11.862974 systemd[1]: Finished initrd-setup-root.service. Oct 2 23:58:11.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:11.882691 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 23:58:11.947644 kernel: audit: type=1130 audit(1696291091.881:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:11.939179 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 23:58:11.956705 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 23:58:11.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.029220 initrd-setup-root-after-ignition[795]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 23:58:12.051632 kernel: audit: type=1130 audit(1696291091.977:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:11.977719 systemd[1]: Reached target ignition-subsequent.target. Oct 2 23:58:12.038028 systemd[1]: Starting initrd-parse-etc.service... 
Oct 2 23:58:12.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.064181 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 23:58:12.150630 kernel: audit: type=1130 audit(1696291092.075:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.064229 systemd[1]: Finished initrd-parse-etc.service. Oct 2 23:58:12.075615 systemd[1]: Reached target initrd-fs.target. Oct 2 23:58:12.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.135596 systemd[1]: Reached target initrd.target. Oct 2 23:58:12.135654 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 23:58:12.136015 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 23:58:12.157729 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 23:58:12.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.174024 systemd[1]: Starting initrd-cleanup.service... Oct 2 23:58:12.192559 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 23:58:12.204684 systemd[1]: Stopped target timers.target. Oct 2 23:58:12.223982 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 23:58:12.224290 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 23:58:12.241196 systemd[1]: Stopped target initrd.target. Oct 2 23:58:12.254828 systemd[1]: Stopped target basic.target. Oct 2 23:58:12.269037 systemd[1]: Stopped target ignition-subsequent.target. Oct 2 23:58:12.286920 systemd[1]: Stopped target ignition-diskful-subsequent.target. Oct 2 23:58:12.303926 systemd[1]: Stopped target initrd-root-device.target. Oct 2 23:58:12.320922 systemd[1]: Stopped target paths.target. Oct 2 23:58:12.335037 systemd[1]: Stopped target remote-fs.target. Oct 2 23:58:12.349918 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 23:58:12.364924 systemd[1]: Stopped target slices.target. Oct 2 23:58:12.380034 systemd[1]: Stopped target sockets.target. Oct 2 23:58:12.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.396923 systemd[1]: Stopped target sysinit.target. Oct 2 23:58:12.412939 systemd[1]: Stopped target local-fs.target. Oct 2 23:58:12.427922 systemd[1]: Stopped target local-fs-pre.target. Oct 2 23:58:12.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.442920 systemd[1]: Stopped target swap.target. 
Oct 2 23:58:12.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.458971 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 23:58:12.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.559671 iscsid[441]: iscsid shutting down. Oct 2 23:58:12.459309 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 23:58:12.474127 systemd[1]: Stopped target cryptsetup.target. Oct 2 23:58:12.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.488812 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 23:58:12.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.494620 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 23:58:12.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.503796 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 23:58:12.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.504127 systemd[1]: Stopped dracut-initqueue.service. Oct 2 23:58:12.519054 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 23:58:12.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.519407 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 23:58:12.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.536011 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 23:58:12.536325 systemd[1]: Stopped initrd-setup-root.service. Oct 2 23:58:12.551351 systemd[1]: Stopping iscsid.service... Oct 2 23:58:12.566561 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 23:58:12.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.566637 systemd[1]: Stopped systemd-sysctl.service. Oct 2 23:58:12.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.587748 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Oct 2 23:58:12.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.587848 systemd[1]: Stopped systemd-modules-load.service. Oct 2 23:58:12.603746 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 23:58:12.603885 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 23:58:12.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.621987 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 23:58:12.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.622275 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 23:58:12.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.638400 systemd[1]: Stopping systemd-udevd.service... Oct 2 23:58:12.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.653962 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 23:58:12.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.654379 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 23:58:12.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:12.654426 systemd[1]: Stopped iscsid.service. Oct 2 23:58:12.667835 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 23:58:12.667912 systemd[1]: Stopped systemd-udevd.service. Oct 2 23:58:12.687868 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 23:58:12.687933 systemd[1]: Closed iscsid.socket. Oct 2 23:58:12.702633 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 23:58:12.702707 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 23:58:12.720719 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 23:58:12.720819 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 23:58:12.735666 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 23:58:12.735805 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 23:58:12.750787 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Oct 2 23:58:12.750928 systemd[1]: Stopped dracut-cmdline.service. Oct 2 23:58:12.767780 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 23:58:12.767915 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 23:58:12.784428 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 23:58:12.801586 systemd[1]: Stopping iscsiuio.service... Oct 2 23:58:12.813536 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 23:58:12.813571 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 23:58:12.831813 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 23:58:12.831874 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 23:58:12.847700 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 23:58:12.847799 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 23:58:12.868027 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 23:58:12.869223 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 23:58:12.869447 systemd[1]: Stopped iscsiuio.service. Oct 2 23:58:12.881204 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 23:58:12.881422 systemd[1]: Finished initrd-cleanup.service. Oct 2 23:58:12.896123 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 23:58:12.896323 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 23:58:12.916512 systemd[1]: Reached target initrd-switch-root.target. Oct 2 23:58:12.930754 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 23:58:12.930868 systemd[1]: Closed iscsiuio.socket. Oct 2 23:58:12.946497 systemd[1]: Starting initrd-switch-root.service... Oct 2 23:58:12.981021 systemd[1]: Switching root. Oct 2 23:58:13.035362 systemd-journald[266]: Journal stopped Oct 2 23:58:16.919567 systemd-journald[266]: Received SIGTERM from PID 1 (n/a). Oct 2 23:58:16.919580 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 23:58:16.919589 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 23:58:16.919595 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 23:58:16.919600 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 23:58:16.919605 kernel: SELinux: policy capability open_perms=1 Oct 2 23:58:16.919610 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 23:58:16.919616 kernel: SELinux: policy capability always_check_network=0 Oct 2 23:58:16.919621 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 23:58:16.919627 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 23:58:16.919632 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 23:58:16.919637 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 23:58:16.919642 systemd[1]: Successfully loaded SELinux policy in 286.442ms. Oct 2 23:58:16.919649 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.125ms. Oct 2 23:58:16.919657 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 23:58:16.919663 systemd[1]: Detected architecture x86-64. Oct 2 23:58:16.919669 systemd[1]: Detected first boot. Oct 2 23:58:16.919674 systemd[1]: Hostname set to . 
Oct 2 23:58:16.919680 systemd[1]: Initializing machine ID from random generator. Oct 2 23:58:16.919686 systemd[1]: Populated /etc with preset unit settings. Oct 2 23:58:16.919692 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 23:58:16.919700 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 23:58:16.919706 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 23:58:16.919712 kernel: kauditd_printk_skb: 32 callbacks suppressed Oct 2 23:58:16.919718 kernel: audit: type=1334 audit(1696291095.483:60): prog-id=10 op=LOAD Oct 2 23:58:16.919723 kernel: audit: type=1334 audit(1696291095.483:61): prog-id=3 op=UNLOAD Oct 2 23:58:16.919729 kernel: audit: type=1334 audit(1696291095.524:62): prog-id=11 op=LOAD Oct 2 23:58:16.919735 kernel: audit: type=1334 audit(1696291095.566:63): prog-id=12 op=LOAD Oct 2 23:58:16.919740 kernel: audit: type=1334 audit(1696291095.566:64): prog-id=4 op=UNLOAD Oct 2 23:58:16.919746 kernel: audit: type=1334 audit(1696291095.566:65): prog-id=5 op=UNLOAD Oct 2 23:58:16.919751 kernel: audit: type=1334 audit(1696291095.626:66): prog-id=13 op=LOAD Oct 2 23:58:16.919757 kernel: audit: type=1334 audit(1696291095.626:67): prog-id=10 op=UNLOAD Oct 2 23:58:16.919762 kernel: audit: type=1334 audit(1696291095.665:68): prog-id=14 op=LOAD Oct 2 23:58:16.919767 kernel: audit: type=1334 audit(1696291095.684:69): prog-id=15 op=LOAD Oct 2 23:58:16.919773 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 23:58:16.919779 systemd[1]: Stopped initrd-switch-root.service. Oct 2 23:58:16.919786 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 23:58:16.919792 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 23:58:16.919798 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 23:58:16.919805 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Oct 2 23:58:16.919812 systemd[1]: Created slice system-getty.slice. Oct 2 23:58:16.919819 systemd[1]: Created slice system-modprobe.slice. Oct 2 23:58:16.919825 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 23:58:16.919831 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 23:58:16.919838 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 23:58:16.919844 systemd[1]: Created slice user.slice. Oct 2 23:58:16.919850 systemd[1]: Started systemd-ask-password-console.path. Oct 2 23:58:16.919856 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 23:58:16.919863 systemd[1]: Set up automount boot.automount. Oct 2 23:58:16.919869 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 23:58:16.919875 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 23:58:16.919881 systemd[1]: Stopped target initrd-fs.target. Oct 2 23:58:16.919887 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 23:58:16.919894 systemd[1]: Reached target integritysetup.target. Oct 2 23:58:16.919900 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 23:58:16.919906 systemd[1]: Reached target remote-fs.target. Oct 2 23:58:16.919913 systemd[1]: Reached target slices.target. Oct 2 23:58:16.919919 systemd[1]: Reached target swap.target. 
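The warnings above about locksmithd.service flag directives systemd 252 still accepts but plans to drop: CPUShares= is superseded by CPUWeight= and MemoryLimit= by MemoryMax= (the docker.socket path is already rewritten to /run/docker.sock automatically, as logged). A sketch of a drop-in that moves the unit to the newer directives follows; the drop-in path, weight, and memory values are placeholders, not taken from the shipped unit.

from pathlib import Path

# Hypothetical migration sketch: a drop-in for locksmithd.service that uses
# CPUWeight=/MemoryMax= instead of the deprecated CPUShares=/MemoryLimit=.
DROPIN = Path("/etc/systemd/system/locksmithd.service.d/10-resource-control.conf")

def write_dropin(cpu_weight: int = 100, memory_max: str = "512M") -> None:
    DROPIN.parent.mkdir(parents=True, exist_ok=True)
    DROPIN.write_text(
        "[Service]\n"
        f"CPUWeight={cpu_weight}\n"   # supersedes CPUShares=
        f"MemoryMax={memory_max}\n"   # supersedes MemoryLimit=
    )

if __name__ == "__main__":
    write_dropin()  # then run: systemctl daemon-reload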
Oct 2 23:58:16.919925 systemd[1]: Reached target torcx.target. Oct 2 23:58:16.919931 systemd[1]: Reached target veritysetup.target. Oct 2 23:58:16.919938 systemd[1]: Listening on systemd-coredump.socket. Oct 2 23:58:16.919944 systemd[1]: Listening on systemd-initctl.socket. Oct 2 23:58:16.919951 systemd[1]: Listening on systemd-networkd.socket. Oct 2 23:58:16.919957 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 23:58:16.919964 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 23:58:16.919971 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 23:58:16.919977 systemd[1]: Mounting dev-hugepages.mount... Oct 2 23:58:16.919983 systemd[1]: Mounting dev-mqueue.mount... Oct 2 23:58:16.919990 systemd[1]: Mounting media.mount... Oct 2 23:58:16.919996 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 23:58:16.920003 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 23:58:16.920009 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 23:58:16.920015 systemd[1]: Mounting tmp.mount... Oct 2 23:58:16.920022 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 23:58:16.920029 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 23:58:16.920036 systemd[1]: Starting kmod-static-nodes.service... Oct 2 23:58:16.920042 systemd[1]: Starting modprobe@configfs.service... Oct 2 23:58:16.920048 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 23:58:16.920054 systemd[1]: Starting modprobe@drm.service... Oct 2 23:58:16.920061 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 23:58:16.920067 systemd[1]: Starting modprobe@fuse.service... Oct 2 23:58:16.920073 kernel: fuse: init (API version 7.34) Oct 2 23:58:16.920079 systemd[1]: Starting modprobe@loop.service... Oct 2 23:58:16.920086 kernel: loop: module loaded Oct 2 23:58:16.920092 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 23:58:16.920099 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 23:58:16.920105 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 23:58:16.920112 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 23:58:16.920118 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 23:58:16.920124 systemd[1]: Stopped systemd-journald.service. Oct 2 23:58:16.920130 systemd[1]: Starting systemd-journald.service... Oct 2 23:58:16.920137 systemd[1]: Starting systemd-modules-load.service... Oct 2 23:58:16.920146 systemd-journald[934]: Journal started Oct 2 23:58:16.920170 systemd-journald[934]: Runtime Journal (/run/log/journal/0c0e918616064da5b8e585cde05a7200) is 8.0M, max 640.1M, 632.1M free. 
Oct 2 23:58:13.561000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 23:58:13.819000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 23:58:13.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 23:58:13.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 23:58:13.821000 audit: BPF prog-id=8 op=LOAD Oct 2 23:58:13.821000 audit: BPF prog-id=8 op=UNLOAD Oct 2 23:58:13.821000 audit: BPF prog-id=9 op=LOAD Oct 2 23:58:13.821000 audit: BPF prog-id=9 op=UNLOAD Oct 2 23:58:15.483000 audit: BPF prog-id=10 op=LOAD Oct 2 23:58:15.483000 audit: BPF prog-id=3 op=UNLOAD Oct 2 23:58:15.524000 audit: BPF prog-id=11 op=LOAD Oct 2 23:58:15.566000 audit: BPF prog-id=12 op=LOAD Oct 2 23:58:15.566000 audit: BPF prog-id=4 op=UNLOAD Oct 2 23:58:15.566000 audit: BPF prog-id=5 op=UNLOAD Oct 2 23:58:15.626000 audit: BPF prog-id=13 op=LOAD Oct 2 23:58:15.626000 audit: BPF prog-id=10 op=UNLOAD Oct 2 23:58:15.665000 audit: BPF prog-id=14 op=LOAD Oct 2 23:58:15.684000 audit: BPF prog-id=15 op=LOAD Oct 2 23:58:15.684000 audit: BPF prog-id=11 op=UNLOAD Oct 2 23:58:15.684000 audit: BPF prog-id=12 op=UNLOAD Oct 2 23:58:15.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:15.742000 audit: BPF prog-id=13 op=UNLOAD Oct 2 23:58:15.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:15.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:16.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:16.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:16.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:16.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 23:58:16.892000 audit: BPF prog-id=16 op=LOAD Oct 2 23:58:16.893000 audit: BPF prog-id=17 op=LOAD Oct 2 23:58:16.893000 audit: BPF prog-id=18 op=LOAD Oct 2 23:58:16.893000 audit: BPF prog-id=14 op=UNLOAD Oct 2 23:58:16.893000 audit: BPF prog-id=15 op=UNLOAD Oct 2 23:58:16.917000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 23:58:16.917000 audit[934]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffd030d33f0 a2=4000 a3=7ffd030d348c items=0 ppid=1 pid=934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:16.917000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 23:58:13.891325 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 23:58:15.481834 systemd[1]: Queued start job for default target multi-user.target. Oct 2 23:58:13.891735 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:13Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 23:58:15.481842 systemd[1]: Unnecessary job was removed for dev-sdb6.device. Oct 2 23:58:13.891748 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:13Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 23:58:15.684686 systemd[1]: systemd-journald.service: Deactivated successfully. 
Oct 2 23:58:13.891769 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:13Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 23:58:13.891775 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:13Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 23:58:13.891794 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:13Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 23:58:13.891802 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:13Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 23:58:13.892126 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:13Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 23:58:13.892152 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:13Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 23:58:13.892161 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:13Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 23:58:13.892546 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:13Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 23:58:13.892567 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:13Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 23:58:13.892579 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:13Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 23:58:13.892588 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:13Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 23:58:13.892599 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:13Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 23:58:13.892608 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:13Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 23:58:15.128463 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:15Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 23:58:15.128612 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:15Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 23:58:15.128668 /usr/lib/systemd/system-generators/torcx-generator[827]: 
time="2023-10-02T23:58:15Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 23:58:15.128759 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:15Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 23:58:15.128790 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:15Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 23:58:15.128829 /usr/lib/systemd/system-generators/torcx-generator[827]: time="2023-10-02T23:58:15Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 23:58:16.950580 systemd[1]: Starting systemd-network-generator.service... Oct 2 23:58:16.972409 systemd[1]: Starting systemd-remount-fs.service... Oct 2 23:58:16.994409 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 23:58:17.026965 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 23:58:17.026985 systemd[1]: Stopped verity-setup.service. Oct 2 23:58:17.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.061411 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 23:58:17.076563 systemd[1]: Started systemd-journald.service. Oct 2 23:58:17.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.084014 systemd[1]: Mounted dev-hugepages.mount. Oct 2 23:58:17.091651 systemd[1]: Mounted dev-mqueue.mount. Oct 2 23:58:17.098649 systemd[1]: Mounted media.mount. Oct 2 23:58:17.105635 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 23:58:17.114639 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 23:58:17.123604 systemd[1]: Mounted tmp.mount. Oct 2 23:58:17.130699 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 23:58:17.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.139750 systemd[1]: Finished kmod-static-nodes.service. Oct 2 23:58:17.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.148759 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 23:58:17.148870 systemd[1]: Finished modprobe@configfs.service. 
Oct 2 23:58:17.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.157786 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 23:58:17.157925 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 23:58:17.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.166875 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 23:58:17.167039 systemd[1]: Finished modprobe@drm.service. Oct 2 23:58:17.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.176061 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 23:58:17.176312 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 23:58:17.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.186181 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 23:58:17.186570 systemd[1]: Finished modprobe@fuse.service. Oct 2 23:58:17.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.195152 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 23:58:17.195490 systemd[1]: Finished modprobe@loop.service. Oct 2 23:58:17.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 23:58:17.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.204187 systemd[1]: Finished systemd-modules-load.service. Oct 2 23:58:17.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.213158 systemd[1]: Finished systemd-network-generator.service. Oct 2 23:58:17.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.222156 systemd[1]: Finished systemd-remount-fs.service. Oct 2 23:58:17.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.231154 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 23:58:17.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.240856 systemd[1]: Reached target network-pre.target. Oct 2 23:58:17.252336 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 23:58:17.263048 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 23:58:17.269637 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 23:58:17.273022 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 23:58:17.281968 systemd[1]: Starting systemd-journal-flush.service... Oct 2 23:58:17.290649 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 23:58:17.293036 systemd[1]: Starting systemd-random-seed.service... Oct 2 23:58:17.294028 systemd-journald[934]: Time spent on flushing to /var/log/journal/0c0e918616064da5b8e585cde05a7200 is 11.139ms for 1263 entries. Oct 2 23:58:17.294028 systemd-journald[934]: System Journal (/var/log/journal/0c0e918616064da5b8e585cde05a7200) is 8.0M, max 195.6M, 187.6M free. Oct 2 23:58:17.324157 systemd-journald[934]: Received client request to flush runtime journal. Oct 2 23:58:17.307493 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 23:58:17.307974 systemd[1]: Starting systemd-sysctl.service... Oct 2 23:58:17.318004 systemd[1]: Starting systemd-sysusers.service... Oct 2 23:58:17.324983 systemd[1]: Starting systemd-udev-settle.service... Oct 2 23:58:17.332478 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 23:58:17.340551 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 23:58:17.348595 systemd[1]: Finished systemd-journal-flush.service. Oct 2 23:58:17.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.356595 systemd[1]: Finished systemd-random-seed.service. 
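
[Editor's note] The journald flush statistics above lend themselves to a quick sanity check: 11.139 ms spread over 1263 entries is roughly 8.8 µs per entry, and the reported journal sizes are self-consistent. A throwaway calculation using only the numbers from those log lines:

    # Quick arithmetic check of the journald flush statistics logged above.
    flush_ms = 11.139   # "Time spent on flushing ... is 11.139ms"
    entries = 1263      # "... for 1263 entries"
    per_entry_us = flush_ms * 1000 / entries
    print(f"~{per_entry_us:.1f} microseconds per journal entry")  # ~8.8

    used_mib, max_mib, free_mib = 8.0, 195.6, 187.6  # "8.0M, max 195.6M, 187.6M free"
    assert abs((max_mib - used_mib) - free_mib) < 0.1
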
Oct 2 23:58:17.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.364558 systemd[1]: Finished systemd-sysctl.service. Oct 2 23:58:17.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.372563 systemd[1]: Finished systemd-sysusers.service. Oct 2 23:58:17.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.381507 systemd[1]: Reached target first-boot-complete.target. Oct 2 23:58:17.390084 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 23:58:17.399346 udevadm[950]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 2 23:58:17.408396 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 23:58:17.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.592809 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 23:58:17.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.601000 audit: BPF prog-id=19 op=LOAD Oct 2 23:58:17.601000 audit: BPF prog-id=20 op=LOAD Oct 2 23:58:17.601000 audit: BPF prog-id=6 op=UNLOAD Oct 2 23:58:17.601000 audit: BPF prog-id=7 op=UNLOAD Oct 2 23:58:17.602607 systemd[1]: Starting systemd-udevd.service... Oct 2 23:58:17.613838 systemd-udevd[953]: Using default interface naming scheme 'v252'. Oct 2 23:58:17.630816 systemd[1]: Started systemd-udevd.service. Oct 2 23:58:17.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:17.641039 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Oct 2 23:58:17.641000 audit: BPF prog-id=21 op=LOAD Oct 2 23:58:17.642378 systemd[1]: Starting systemd-networkd.service... Oct 2 23:58:17.661381 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 23:58:17.687377 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Oct 2 23:58:17.701000 audit: BPF prog-id=22 op=LOAD Oct 2 23:58:17.701000 audit: BPF prog-id=23 op=LOAD Oct 2 23:58:17.701000 audit: BPF prog-id=24 op=LOAD Oct 2 23:58:17.702375 systemd[1]: Starting systemd-userdbd.service... Oct 2 23:58:17.718595 kernel: ACPI: button: Sleep Button [SLPB] Oct 2 23:58:17.718698 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 2 23:58:17.737375 kernel: ACPI: button: Power Button [PWRF] Oct 2 23:58:17.738162 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
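
[Editor's note] The "Found device dev-disk-by\x2dlabel-OEM.device" entry corresponds to the udev-maintained symlink under /dev/disk/by-label. A read-only sketch (assuming the usual by-label symlink layout; purely illustrative) that maps labels to their backing devices:

    # Minimal sketch: list /dev/disk/by-label symlinks such as the OEM device found above.
    import os

    BY_LABEL = "/dev/disk/by-label"

    def labelled_devices():
        if not os.path.isdir(BY_LABEL):
            return {}
        return {
            name: os.path.realpath(os.path.join(BY_LABEL, name))
            for name in sorted(os.listdir(BY_LABEL))
        }

    if __name__ == "__main__":
        for label, device in labelled_devices().items():
            print(f"{label} -> {device}")
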
Oct 2 23:58:17.694000 audit[991]: AVC avc: denied { confidentiality } for pid=991 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 23:58:17.694000 audit[991]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55985a785bb0 a1=4d8bc a2=7efc89ad3bc5 a3=5 items=40 ppid=953 pid=991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:17.694000 audit: CWD cwd="/" Oct 2 23:58:17.694000 audit: PATH item=0 name=(null) inode=27043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=1 name=(null) inode=27044 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=2 name=(null) inode=27043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=3 name=(null) inode=27045 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=4 name=(null) inode=27043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=5 name=(null) inode=27046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=6 name=(null) inode=27046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=7 name=(null) inode=27047 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=8 name=(null) inode=27046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=9 name=(null) inode=27048 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=10 name=(null) inode=27046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=11 name=(null) inode=27049 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=12 name=(null) inode=27046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=13 name=(null) inode=27050 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=14 name=(null) inode=27046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=15 name=(null) inode=27051 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=16 name=(null) inode=27043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=17 name=(null) inode=27052 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=18 name=(null) inode=27052 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=19 name=(null) inode=27053 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=20 name=(null) inode=27052 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=21 name=(null) inode=27054 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=22 name=(null) inode=27052 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=23 name=(null) inode=27055 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=24 name=(null) inode=27052 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=25 name=(null) inode=27056 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=26 name=(null) inode=27052 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=27 name=(null) inode=27057 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=28 name=(null) inode=27043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=29 name=(null) inode=27058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=30 name=(null) inode=27058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=31 name=(null) inode=27059 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=32 name=(null) inode=27058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=33 name=(null) inode=27060 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=34 name=(null) inode=27058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=35 name=(null) inode=27061 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=36 name=(null) inode=27058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=37 name=(null) inode=27062 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=38 name=(null) inode=27058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PATH item=39 name=(null) inode=27063 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 23:58:17.694000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 23:58:17.755383 kernel: IPMI message handler: version 39.2 Oct 2 23:58:17.757381 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Oct 2 23:58:17.774687 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Oct 2 23:58:17.775615 systemd[1]: Started systemd-userdbd.service. Oct 2 23:58:17.789377 kernel: ipmi device interface Oct 2 23:58:17.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 23:58:17.851503 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Oct 2 23:58:17.851634 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Oct 2 23:58:17.868403 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Oct 2 23:58:17.903569 kernel: ipmi_si: IPMI System Interface driver Oct 2 23:58:17.903622 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Oct 2 23:58:17.903706 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Oct 2 23:58:17.920812 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Oct 2 23:58:17.952045 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Oct 2 23:58:17.952271 kernel: iTCO_vendor_support: vendor-support=0 Oct 2 23:58:17.967377 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Oct 2 23:58:18.027076 systemd-networkd[997]: bond0: netdev ready Oct 2 23:58:18.027380 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Oct 2 23:58:18.027477 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Oct 2 23:58:18.027567 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Oct 2 23:58:18.027634 kernel: ipmi_si: Adding ACPI-specified kcs state machine Oct 2 23:58:18.027645 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Oct 2 23:58:18.029182 systemd-networkd[997]: lo: Link UP Oct 2 23:58:18.029184 systemd-networkd[997]: lo: Gained carrier Oct 2 23:58:18.029486 systemd-networkd[997]: Enumeration completed Oct 2 23:58:18.029545 systemd[1]: Started systemd-networkd.service. Oct 2 23:58:18.029771 systemd-networkd[997]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Oct 2 23:58:18.032176 systemd-networkd[997]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:5c:3e:b9.network. Oct 2 23:58:18.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:18.136372 kernel: intel_rapl_common: Found RAPL domain package Oct 2 23:58:18.136438 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Oct 2 23:58:18.136879 kernel: intel_rapl_common: Found RAPL domain core Oct 2 23:58:18.174688 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Oct 2 23:58:18.175480 kernel: intel_rapl_common: Found RAPL domain dram Oct 2 23:58:18.285373 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Oct 2 23:58:18.304409 kernel: ipmi_ssif: IPMI SSIF Interface driver Oct 2 23:58:18.310657 systemd[1]: Finished systemd-udev-settle.service. Oct 2 23:58:18.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:18.319092 systemd[1]: Starting lvm2-activation-early.service... Oct 2 23:58:18.337217 lvm[1057]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 23:58:18.362753 systemd[1]: Finished lvm2-activation-early.service. 
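
[Editor's note] systemd-networkd is assembling bond0 from the two mlx5 ports above, and the kernel exposes the resulting bond state under /sys/class/net/bond0. A small read-only sketch using the standard Linux bonding sysfs attributes (nothing Flatcar-specific) to inspect that state after boot:

    # Read-only sketch: inspect the bond0 state that networkd is assembling above.
    from pathlib import Path

    def read(path):
        try:
            return Path(path).read_text().strip()
        except OSError:
            return ""

    bond = "bond0"
    print("operstate:", read(f"/sys/class/net/{bond}/operstate") or "unknown")
    print("mode:     ", read(f"/sys/class/net/{bond}/bonding/mode") or "unknown")
    for slave in read(f"/sys/class/net/{bond}/bonding/slaves").split():
        speed = read(f"/sys/class/net/{slave}/speed") or "?"
        print(f"  slave {slave}: {speed} Mb/s")
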
Oct 2 23:58:18.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:18.371498 systemd[1]: Reached target cryptsetup.target. Oct 2 23:58:18.380001 systemd[1]: Starting lvm2-activation.service... Oct 2 23:58:18.382053 lvm[1058]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 23:58:18.411763 systemd[1]: Finished lvm2-activation.service. Oct 2 23:58:18.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:18.420487 systemd[1]: Reached target local-fs-pre.target. Oct 2 23:58:18.428479 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 23:58:18.428493 systemd[1]: Reached target local-fs.target. Oct 2 23:58:18.436478 systemd[1]: Reached target machines.target. Oct 2 23:58:18.445009 systemd[1]: Starting ldconfig.service... Oct 2 23:58:18.451948 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 23:58:18.451969 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 23:58:18.452522 systemd[1]: Starting systemd-boot-update.service... Oct 2 23:58:18.459937 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 23:58:18.470032 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 23:58:18.470132 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 23:58:18.470154 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 23:58:18.470662 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 23:58:18.470935 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1060 (bootctl) Oct 2 23:58:18.471478 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 23:58:18.482158 systemd-tmpfiles[1064]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 23:58:18.483140 systemd-tmpfiles[1064]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 23:58:18.486255 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 23:58:18.486560 systemd-tmpfiles[1064]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 23:58:18.486673 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 23:58:18.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:18.491827 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 23:58:18.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 23:58:18.550076 systemd-fsck[1068]: fsck.fat 4.2 (2021-01-31) Oct 2 23:58:18.550076 systemd-fsck[1068]: /dev/sdb1: 789 files, 115069/258078 clusters Oct 2 23:58:18.550770 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 23:58:18.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:18.562082 systemd[1]: Mounting boot.mount... Oct 2 23:58:18.584343 systemd[1]: Mounted boot.mount. Oct 2 23:58:18.603619 systemd[1]: Finished systemd-boot-update.service. Oct 2 23:58:18.614441 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Oct 2 23:58:18.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:18.632945 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 23:58:18.638905 systemd-networkd[997]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:5c:3e:b8.network. Oct 2 23:58:18.639407 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Oct 2 23:58:18.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:18.647191 systemd[1]: Starting audit-rules.service... Oct 2 23:58:18.661555 systemd[1]: Starting clean-ca-certificates.service... Oct 2 23:58:18.668000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 23:58:18.668000 audit[1090]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcf4cbfe50 a2=420 a3=0 items=0 ppid=1074 pid=1090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:18.668000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 23:58:18.669101 augenrules[1090]: No rules Oct 2 23:58:18.672521 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Oct 2 23:58:18.681078 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 23:58:18.690420 systemd[1]: Starting systemd-resolved.service... Oct 2 23:58:18.698287 systemd[1]: Starting systemd-timesyncd.service... Oct 2 23:58:18.705954 systemd[1]: Starting systemd-update-utmp.service... Oct 2 23:58:18.712714 systemd[1]: Finished audit-rules.service. Oct 2 23:58:18.719576 systemd[1]: Finished clean-ca-certificates.service. Oct 2 23:58:18.727628 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 23:58:18.739174 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 23:58:18.739810 systemd[1]: Finished systemd-update-utmp.service. Oct 2 23:58:18.782213 systemd[1]: Started systemd-timesyncd.service. Oct 2 23:58:18.784105 systemd-resolved[1096]: Positive Trust Anchors: Oct 2 23:58:18.784111 systemd-resolved[1096]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 23:58:18.784133 systemd-resolved[1096]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 23:58:18.786748 ldconfig[1059]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 23:58:18.800427 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Oct 2 23:58:18.807819 systemd-resolved[1096]: Using system hostname 'ci-3510.3.0-a-39a3b4667d'. Oct 2 23:58:18.813632 systemd[1]: Finished ldconfig.service. Oct 2 23:58:18.818486 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Oct 2 23:58:18.831378 systemd[1]: Reached target time-set.target. Oct 2 23:58:18.843432 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Oct 2 23:58:18.858149 systemd[1]: Starting systemd-update-done.service... Oct 2 23:58:18.864403 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Oct 2 23:58:18.864565 systemd-networkd[997]: bond0: Link UP Oct 2 23:58:18.864766 systemd-networkd[997]: enp1s0f1np1: Link UP Oct 2 23:58:18.864902 systemd-networkd[997]: enp1s0f0np0: Link UP Oct 2 23:58:18.865011 systemd-networkd[997]: enp1s0f1np1: Gained carrier Oct 2 23:58:18.865978 systemd-networkd[997]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:5c:3e:b8.network. Oct 2 23:58:18.878701 systemd[1]: Started systemd-resolved.service. Oct 2 23:58:18.899767 systemd[1]: Finished systemd-update-done.service. Oct 2 23:58:18.903310 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Oct 2 23:58:18.903334 kernel: bond0: active interface up! Oct 2 23:58:18.918041 systemd[1]: Reached target network.target. Oct 2 23:58:18.930429 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Oct 2 23:58:18.938406 systemd[1]: Reached target nss-lookup.target. Oct 2 23:58:18.946453 systemd[1]: Reached target sysinit.target. Oct 2 23:58:18.954447 systemd[1]: Started motdgen.path. Oct 2 23:58:18.961422 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 23:58:18.971469 systemd[1]: Started logrotate.timer. Oct 2 23:58:18.978453 systemd[1]: Started mdadm.timer. Oct 2 23:58:18.985404 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 23:58:18.993405 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 23:58:18.993422 systemd[1]: Reached target paths.target. Oct 2 23:58:19.000399 systemd[1]: Reached target timers.target. Oct 2 23:58:19.007520 systemd[1]: Listening on dbus.socket. Oct 2 23:58:19.012479 systemd-networkd[997]: bond0: Gained carrier Oct 2 23:58:19.012573 systemd-networkd[997]: enp1s0f0np0: Gained carrier Oct 2 23:58:19.012607 systemd-timesyncd[1097]: Network configuration changed, trying to establish connection. Oct 2 23:58:19.014900 systemd[1]: Starting docker.socket... Oct 2 23:58:19.022953 systemd[1]: Listening on sshd.socket. 
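
[Editor's note] The negative trust anchors listed above are domain suffixes under which systemd-resolved skips DNSSEC validation. As a toy illustration of that matching (suffix comparison on label boundaries, using a subset of the logged entries; this is not resolved's actual implementation):

    # Toy illustration: a name is covered when it equals a negative trust anchor
    # or is a subdomain of one. Not systemd-resolved's code.
    NEGATIVE_ANCHORS = {"home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
                        "corp", "home", "internal", "intranet", "lan", "local",
                        "private", "test"}  # subset of the list logged above

    def under_negative_anchor(name: str) -> bool:
        labels = name.rstrip(".").lower().split(".")
        return any(".".join(labels[i:]) in NEGATIVE_ANCHORS for i in range(len(labels)))

    assert under_negative_anchor("printer.lan")
    assert under_negative_anchor("4.3.2.10.in-addr.arpa")
    assert not under_negative_anchor("example.org")
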
Oct 2 23:58:19.025584 systemd-timesyncd[1097]: Network configuration changed, trying to establish connection. Oct 2 23:58:19.025728 systemd-networkd[997]: enp1s0f1np1: Link DOWN Oct 2 23:58:19.025731 systemd-networkd[997]: enp1s0f1np1: Lost carrier Oct 2 23:58:19.029505 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 23:58:19.029892 systemd[1]: Listening on docker.socket. Oct 2 23:58:19.033562 systemd-timesyncd[1097]: Network configuration changed, trying to establish connection. Oct 2 23:58:19.033674 systemd-timesyncd[1097]: Network configuration changed, trying to establish connection. Oct 2 23:58:19.036512 systemd[1]: Reached target sockets.target. Oct 2 23:58:19.037406 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Oct 2 23:58:19.077399 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Oct 2 23:58:19.094496 systemd[1]: Reached target basic.target. Oct 2 23:58:19.100420 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Oct 2 23:58:19.115480 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 23:58:19.115493 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 23:58:19.116057 systemd[1]: Starting containerd.service... Oct 2 23:58:19.123427 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Oct 2 23:58:19.138954 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Oct 2 23:58:19.146400 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Oct 2 23:58:19.164056 systemd[1]: Starting coreos-metadata.service... Oct 2 23:58:19.168405 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Oct 2 23:58:19.183942 systemd[1]: Starting dbus.service... Oct 2 23:58:19.190397 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Oct 2 23:58:19.190426 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Oct 2 23:58:19.201337 dbus-daemon[1111]: [system] SELinux support is enabled Oct 2 23:58:19.203205 coreos-metadata[1107]: Oct 02 23:58:19.201 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Oct 2 23:58:19.206168 coreos-metadata[1105]: Oct 02 23:58:19.206 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Oct 2 23:58:19.206372 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Oct 2 23:58:19.212593 systemd-networkd[997]: enp1s0f1np1: Link UP Oct 2 23:58:19.212835 systemd-timesyncd[1097]: Network configuration changed, trying to establish connection. Oct 2 23:58:19.212886 systemd-timesyncd[1097]: Network configuration changed, trying to establish connection. Oct 2 23:58:19.212888 systemd-networkd[997]: enp1s0f1np1: Gained carrier Oct 2 23:58:19.225419 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Oct 2 23:58:19.248018 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 23:58:19.252840 jq[1115]: false Oct 2 23:58:19.254969 systemd[1]: Starting extend-filesystems.service... Oct 2 23:58:19.258507 systemd-timesyncd[1097]: Network configuration changed, trying to establish connection. 
Oct 2 23:58:19.258542 systemd-timesyncd[1097]: Network configuration changed, trying to establish connection. Oct 2 23:58:19.258608 systemd-timesyncd[1097]: Network configuration changed, trying to establish connection. Oct 2 23:58:19.261419 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 23:58:19.261968 systemd[1]: Starting motdgen.service... Oct 2 23:58:19.262407 extend-filesystems[1116]: Found sda Oct 2 23:58:19.262407 extend-filesystems[1116]: Found sdb Oct 2 23:58:19.304497 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Oct 2 23:58:19.268995 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 23:58:19.304578 extend-filesystems[1116]: Found sdb1 Oct 2 23:58:19.304578 extend-filesystems[1116]: Found sdb2 Oct 2 23:58:19.304578 extend-filesystems[1116]: Found sdb3 Oct 2 23:58:19.304578 extend-filesystems[1116]: Found usr Oct 2 23:58:19.304578 extend-filesystems[1116]: Found sdb4 Oct 2 23:58:19.304578 extend-filesystems[1116]: Found sdb6 Oct 2 23:58:19.304578 extend-filesystems[1116]: Found sdb7 Oct 2 23:58:19.304578 extend-filesystems[1116]: Found sdb9 Oct 2 23:58:19.304578 extend-filesystems[1116]: Checking size of /dev/sdb9 Oct 2 23:58:19.304578 extend-filesystems[1116]: Resized partition /dev/sdb9 Oct 2 23:58:19.435461 kernel: bond0: (slave enp1s0f1np1): link status up again after 100 ms Oct 2 23:58:19.435480 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Oct 2 23:58:19.298304 systemd[1]: Starting prepare-critools.service... Oct 2 23:58:19.435662 extend-filesystems[1130]: resize2fs 1.46.5 (30-Dec-2021) Oct 2 23:58:19.312056 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 23:58:19.330949 systemd[1]: Starting sshd-keygen.service... Oct 2 23:58:19.371035 systemd[1]: Starting systemd-logind.service... Oct 2 23:58:19.389443 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 23:58:19.389968 systemd[1]: Starting tcsd.service... Oct 2 23:58:19.450895 jq[1145]: true Oct 2 23:58:19.395434 systemd-logind[1142]: Watching system buttons on /dev/input/event3 (Power Button) Oct 2 23:58:19.395445 systemd-logind[1142]: Watching system buttons on /dev/input/event2 (Sleep Button) Oct 2 23:58:19.395454 systemd-logind[1142]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Oct 2 23:58:19.395589 systemd-logind[1142]: New seat seat0. Oct 2 23:58:19.397673 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 23:58:19.398007 systemd[1]: Starting update-engine.service... Oct 2 23:58:19.409105 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 23:58:19.420747 systemd[1]: Started dbus.service. Oct 2 23:58:19.444137 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 23:58:19.444225 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 23:58:19.444365 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 23:58:19.444445 systemd[1]: Finished motdgen.service. 
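
[Editor's note] The resize reported above grows /dev/sdb9 from 553472 to 116605649 blocks; with the 4 KiB block size the resize2fs output later confirms, that is roughly 2.1 GiB expanding to about 445 GiB. A scratch calculation of those figures, using only numbers from the log:

    # Scratch calculation of the online resize reported above (4 KiB blocks).
    BLOCK = 4096
    before_blocks, after_blocks = 553_472, 116_605_649  # from the kernel/resize2fs lines

    def gib(blocks):
        return blocks * BLOCK / 2**30

    print(f"before: {gib(before_blocks):.2f} GiB")  # ~2.11 GiB
    print(f"after:  {gib(after_blocks):.2f} GiB")   # ~444.8 GiB
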
Oct 2 23:58:19.451619 update_engine[1144]: I1002 23:58:19.451126 1144 main.cc:92] Flatcar Update Engine starting Oct 2 23:58:19.454817 update_engine[1144]: I1002 23:58:19.454780 1144 update_check_scheduler.cc:74] Next update check in 6m34s Oct 2 23:58:19.458337 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 23:58:19.458431 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 23:58:19.465480 tar[1147]: ./ Oct 2 23:58:19.465480 tar[1147]: ./loopback Oct 2 23:58:19.469496 dbus-daemon[1111]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 2 23:58:19.469957 jq[1151]: false Oct 2 23:58:19.470210 systemd[1]: update-ssh-keys-after-ignition.service: Skipped due to 'exec-condition'. Oct 2 23:58:19.470311 systemd[1]: Condition check resulted in update-ssh-keys-after-ignition.service being skipped. Oct 2 23:58:19.470481 tar[1148]: crictl Oct 2 23:58:19.476223 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Oct 2 23:58:19.476343 systemd[1]: Condition check resulted in tcsd.service being skipped. Oct 2 23:58:19.477648 systemd[1]: Started update-engine.service. Oct 2 23:58:19.479508 env[1152]: time="2023-10-02T23:58:19.479485561Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 23:58:19.487475 tar[1147]: ./bandwidth Oct 2 23:58:19.487733 env[1152]: time="2023-10-02T23:58:19.487720112Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 23:58:19.487793 env[1152]: time="2023-10-02T23:58:19.487779696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 23:58:19.488421 env[1152]: time="2023-10-02T23:58:19.488357054Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 23:58:19.488421 env[1152]: time="2023-10-02T23:58:19.488376622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 23:58:19.488496 env[1152]: time="2023-10-02T23:58:19.488478165Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 23:58:19.488496 env[1152]: time="2023-10-02T23:58:19.488488541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 23:58:19.488538 env[1152]: time="2023-10-02T23:58:19.488495651Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 23:58:19.488538 env[1152]: time="2023-10-02T23:58:19.488501252Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 23:58:19.488573 env[1152]: time="2023-10-02T23:58:19.488540166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 23:58:19.488660 env[1152]: time="2023-10-02T23:58:19.488652740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Oct 2 23:58:19.488725 env[1152]: time="2023-10-02T23:58:19.488716542Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 23:58:19.488744 env[1152]: time="2023-10-02T23:58:19.488725633Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 23:58:19.488761 env[1152]: time="2023-10-02T23:58:19.488750525Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 23:58:19.488761 env[1152]: time="2023-10-02T23:58:19.488757985Z" level=info msg="metadata content store policy set" policy=shared Oct 2 23:58:19.489440 systemd[1]: Started systemd-logind.service. Oct 2 23:58:19.499966 systemd[1]: Started locksmithd.service. Oct 2 23:58:19.504788 env[1152]: time="2023-10-02T23:58:19.504774385Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 23:58:19.504829 env[1152]: time="2023-10-02T23:58:19.504792457Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 23:58:19.504829 env[1152]: time="2023-10-02T23:58:19.504800489Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 23:58:19.504829 env[1152]: time="2023-10-02T23:58:19.504817417Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 23:58:19.504829 env[1152]: time="2023-10-02T23:58:19.504826935Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 23:58:19.504898 env[1152]: time="2023-10-02T23:58:19.504834399Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 23:58:19.504898 env[1152]: time="2023-10-02T23:58:19.504841352Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 23:58:19.504898 env[1152]: time="2023-10-02T23:58:19.504848909Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 23:58:19.504898 env[1152]: time="2023-10-02T23:58:19.504856120Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 23:58:19.504898 env[1152]: time="2023-10-02T23:58:19.504863716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 23:58:19.504898 env[1152]: time="2023-10-02T23:58:19.504870274Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 23:58:19.504898 env[1152]: time="2023-10-02T23:58:19.504876976Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 23:58:19.505005 env[1152]: time="2023-10-02T23:58:19.504929872Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 23:58:19.505005 env[1152]: time="2023-10-02T23:58:19.504974373Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Oct 2 23:58:19.505131 env[1152]: time="2023-10-02T23:58:19.505115598Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 23:58:19.505168 env[1152]: time="2023-10-02T23:58:19.505142251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 23:58:19.505168 env[1152]: time="2023-10-02T23:58:19.505157105Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 23:58:19.505220 env[1152]: time="2023-10-02T23:58:19.505195711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 23:58:19.505220 env[1152]: time="2023-10-02T23:58:19.505209345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 23:58:19.505274 env[1152]: time="2023-10-02T23:58:19.505221028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 23:58:19.505274 env[1152]: time="2023-10-02T23:58:19.505232292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 23:58:19.505274 env[1152]: time="2023-10-02T23:58:19.505244179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 23:58:19.505274 env[1152]: time="2023-10-02T23:58:19.505255245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 23:58:19.505274 env[1152]: time="2023-10-02T23:58:19.505266964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 23:58:19.505406 env[1152]: time="2023-10-02T23:58:19.505277837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 23:58:19.505406 env[1152]: time="2023-10-02T23:58:19.505291775Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 23:58:19.505406 env[1152]: time="2023-10-02T23:58:19.505391622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 23:58:19.505495 env[1152]: time="2023-10-02T23:58:19.505405263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 23:58:19.505495 env[1152]: time="2023-10-02T23:58:19.505430812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 23:58:19.505495 env[1152]: time="2023-10-02T23:58:19.505444233Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 23:58:19.505495 env[1152]: time="2023-10-02T23:58:19.505457924Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 23:58:19.505495 env[1152]: time="2023-10-02T23:58:19.505470131Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 23:58:19.505495 env[1152]: time="2023-10-02T23:58:19.505487820Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 23:58:19.505642 env[1152]: time="2023-10-02T23:58:19.505515469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 2 23:58:19.505714 env[1152]: time="2023-10-02T23:58:19.505685385Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 23:58:19.508276 env[1152]: time="2023-10-02T23:58:19.505721475Z" level=info msg="Connect containerd service" Oct 2 23:58:19.508276 env[1152]: time="2023-10-02T23:58:19.505745404Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 23:58:19.508276 env[1152]: time="2023-10-02T23:58:19.506024797Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 23:58:19.508276 env[1152]: time="2023-10-02T23:58:19.506113239Z" level=info msg="Start subscribing containerd event" Oct 2 23:58:19.508276 env[1152]: time="2023-10-02T23:58:19.506141060Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 23:58:19.508276 env[1152]: time="2023-10-02T23:58:19.506143824Z" level=info msg="Start recovering state" Oct 2 23:58:19.508276 env[1152]: time="2023-10-02T23:58:19.506164715Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 2 23:58:19.508276 env[1152]: time="2023-10-02T23:58:19.506181904Z" level=info msg="Start event monitor" Oct 2 23:58:19.508276 env[1152]: time="2023-10-02T23:58:19.506192915Z" level=info msg="Start snapshots syncer" Oct 2 23:58:19.508276 env[1152]: time="2023-10-02T23:58:19.506198792Z" level=info msg="Start cni network conf syncer for default" Oct 2 23:58:19.508276 env[1152]: time="2023-10-02T23:58:19.506202777Z" level=info msg="Start streaming server" Oct 2 23:58:19.508276 env[1152]: time="2023-10-02T23:58:19.506190595Z" level=info msg="containerd successfully booted in 0.027027s" Oct 2 23:58:19.506515 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 23:58:19.506648 systemd[1]: Reached target system-config.target. Oct 2 23:58:19.515467 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 23:58:19.515541 systemd[1]: Reached target user-config.target. Oct 2 23:58:19.524179 tar[1147]: ./ptp Oct 2 23:58:19.524979 systemd[1]: Started containerd.service. Oct 2 23:58:19.554276 tar[1147]: ./vlan Oct 2 23:58:19.584024 tar[1147]: ./host-device Oct 2 23:58:19.589491 locksmithd[1172]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 23:58:19.610054 tar[1147]: ./tuning Oct 2 23:58:19.633835 tar[1147]: ./vrf Oct 2 23:58:19.658667 tar[1147]: ./sbr Oct 2 23:58:19.683242 tar[1147]: ./tap Oct 2 23:58:19.710038 tar[1147]: ./dhcp Oct 2 23:58:19.782047 tar[1147]: ./static Oct 2 23:58:19.782674 systemd[1]: Finished prepare-critools.service. Oct 2 23:58:19.801328 tar[1147]: ./firewall Oct 2 23:58:19.830746 tar[1147]: ./macvlan Oct 2 23:58:19.858193 tar[1147]: ./dummy Oct 2 23:58:19.884836 tar[1147]: ./bridge Oct 2 23:58:19.914021 tar[1147]: ./ipvlan Oct 2 23:58:19.940232 tar[1147]: ./portmap Oct 2 23:58:19.965442 tar[1147]: ./host-local Oct 2 23:58:19.966373 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Oct 2 23:58:19.992514 extend-filesystems[1130]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Oct 2 23:58:19.992514 extend-filesystems[1130]: old_desc_blocks = 1, new_desc_blocks = 56 Oct 2 23:58:19.992514 extend-filesystems[1130]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Oct 2 23:58:20.029468 extend-filesystems[1116]: Resized filesystem in /dev/sdb9 Oct 2 23:58:19.993008 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 23:58:19.993098 systemd[1]: Finished extend-filesystems.service. Oct 2 23:58:20.023545 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 23:58:20.562332 sshd_keygen[1141]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 23:58:20.574215 systemd[1]: Finished sshd-keygen.service. Oct 2 23:58:20.582260 systemd[1]: Starting issuegen.service... Oct 2 23:58:20.590659 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 23:58:20.590781 systemd[1]: Finished issuegen.service. Oct 2 23:58:20.599231 systemd[1]: Starting systemd-user-sessions.service... Oct 2 23:58:20.608628 systemd[1]: Finished systemd-user-sessions.service. Oct 2 23:58:20.618100 systemd[1]: Started getty@tty1.service. Oct 2 23:58:20.625967 systemd[1]: Started serial-getty@ttyS1.service. 
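
[Editor's note] sshd-keygen reports fresh RSA, ECDSA and ED25519 host keys above. A small sketch (assuming the conventional /etc/ssh/ssh_host_*_key.pub paths) that prints their fingerprints with ssh-keygen:

    # Sketch: print fingerprints of the freshly generated host keys reported above.
    import glob
    import subprocess

    for pub in sorted(glob.glob("/etc/ssh/ssh_host_*_key.pub")):
        out = subprocess.run(["ssh-keygen", "-lf", pub],
                             capture_output=True, text=True, check=False)
        print(out.stdout.strip() or out.stderr.strip())
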
Oct 2 23:58:20.627451 systemd-networkd[997]: bond0: Gained IPv6LL Oct 2 23:58:20.627640 systemd-timesyncd[1097]: Network configuration changed, trying to establish connection. Oct 2 23:58:20.635491 systemd[1]: Reached target getty.target. Oct 2 23:58:20.883544 systemd-timesyncd[1097]: Network configuration changed, trying to establish connection. Oct 2 23:58:20.883701 systemd-timesyncd[1097]: Network configuration changed, trying to establish connection. Oct 2 23:58:21.591598 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Oct 2 23:58:25.364792 coreos-metadata[1107]: Oct 02 23:58:25.364 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Oct 2 23:58:25.365560 coreos-metadata[1105]: Oct 02 23:58:25.364 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Oct 2 23:58:25.647653 login[1198]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 2 23:58:25.654562 login[1197]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 2 23:58:25.655092 systemd[1]: Created slice user-500.slice. Oct 2 23:58:25.655717 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 23:58:25.656786 systemd-logind[1142]: New session 1 of user core. Oct 2 23:58:25.658941 systemd-logind[1142]: New session 2 of user core. Oct 2 23:58:25.661337 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 23:58:25.662081 systemd[1]: Starting user@500.service... Oct 2 23:58:25.664522 (systemd)[1202]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 23:58:25.732099 systemd[1202]: Queued start job for default target default.target. Oct 2 23:58:25.732339 systemd[1202]: Reached target paths.target. Oct 2 23:58:25.732351 systemd[1202]: Reached target sockets.target. Oct 2 23:58:25.732359 systemd[1202]: Reached target timers.target. Oct 2 23:58:25.732369 systemd[1202]: Reached target basic.target. Oct 2 23:58:25.732389 systemd[1202]: Reached target default.target. Oct 2 23:58:25.732403 systemd[1202]: Startup finished in 64ms. Oct 2 23:58:25.732464 systemd[1]: Started user@500.service. Oct 2 23:58:25.733003 systemd[1]: Started session-1.scope. Oct 2 23:58:25.733352 systemd[1]: Started session-2.scope. Oct 2 23:58:26.365258 coreos-metadata[1105]: Oct 02 23:58:26.365 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Oct 2 23:58:26.365540 coreos-metadata[1107]: Oct 02 23:58:26.365 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Oct 2 23:58:26.977735 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Oct 2 23:58:26.977906 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Oct 2 23:58:27.440397 coreos-metadata[1107]: Oct 02 23:58:27.440 INFO Fetch successful Oct 2 23:58:27.441361 coreos-metadata[1105]: Oct 02 23:58:27.440 INFO Fetch successful Oct 2 23:58:27.464828 systemd[1]: Finished coreos-metadata.service. Oct 2 23:58:27.465687 systemd[1]: Started packet-phone-home.service. 
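Note: both coreos-metadata fetchers fail their first attempt with a DNS lookup error, apparently because the bond is still converging, and succeed on attempt #2. A sketch for reproducing the same fetch by hand (the URL is the one logged above; resolvectl assumes systemd-resolved is the active resolver on this image):

  resolvectl query metadata.packet.net                    # first attempts failed here with "Name or service not known"
  curl -sf https://metadata.packet.net/metadata | head    # same endpoint coreos-metadata retries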
Oct 2 23:58:27.465973 unknown[1105]: wrote ssh authorized keys file for user: core Oct 2 23:58:27.473027 curl[1224]: % Total % Received % Xferd Average Speed Time Time Time Current Oct 2 23:58:27.473181 curl[1224]: Dload Upload Total Spent Left Speed Oct 2 23:58:27.483236 update-ssh-keys[1225]: Updated "/home/core/.ssh/authorized_keys" Oct 2 23:58:27.483433 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Oct 2 23:58:27.483618 systemd[1]: Reached target multi-user.target. Oct 2 23:58:27.484228 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 23:58:27.488213 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 23:58:27.488285 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 23:58:27.488364 systemd[1]: Startup finished in 1.851s (kernel) + 6.410s (initrd) + 14.242s (userspace) = 22.504s. Oct 2 23:58:27.630059 curl[1224]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Oct 2 23:58:27.632443 systemd[1]: packet-phone-home.service: Deactivated successfully. Oct 2 23:58:29.754919 systemd[1]: Created slice system-sshd.slice. Oct 2 23:58:29.757809 systemd[1]: Started sshd@0-139.178.89.117:22-139.178.89.65:46358.service. Oct 2 23:58:29.845899 sshd[1228]: Accepted publickey for core from 139.178.89.65 port 46358 ssh2: RSA SHA256:6bSavBiaJ/6Bay5oW/hArqm18cB9FuXY6RiKsI2WLUU Oct 2 23:58:29.846753 sshd[1228]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 23:58:29.849715 systemd-logind[1142]: New session 3 of user core. Oct 2 23:58:29.850231 systemd[1]: Started session-3.scope. Oct 2 23:58:29.901727 systemd[1]: Started sshd@1-139.178.89.117:22-139.178.89.65:46364.service. Oct 2 23:58:29.936677 sshd[1233]: Accepted publickey for core from 139.178.89.65 port 46364 ssh2: RSA SHA256:6bSavBiaJ/6Bay5oW/hArqm18cB9FuXY6RiKsI2WLUU Oct 2 23:58:29.937397 sshd[1233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 23:58:29.939848 systemd-logind[1142]: New session 4 of user core. Oct 2 23:58:29.940279 systemd[1]: Started session-4.scope. Oct 2 23:58:29.993010 sshd[1233]: pam_unix(sshd:session): session closed for user core Oct 2 23:58:29.994420 systemd[1]: sshd@1-139.178.89.117:22-139.178.89.65:46364.service: Deactivated successfully. Oct 2 23:58:29.994724 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 23:58:29.995032 systemd-logind[1142]: Session 4 logged out. Waiting for processes to exit. Oct 2 23:58:29.995549 systemd[1]: Started sshd@2-139.178.89.117:22-139.178.89.65:46366.service. Oct 2 23:58:29.995970 systemd-logind[1142]: Removed session 4. Oct 2 23:58:30.031930 sshd[1239]: Accepted publickey for core from 139.178.89.65 port 46366 ssh2: RSA SHA256:6bSavBiaJ/6Bay5oW/hArqm18cB9FuXY6RiKsI2WLUU Oct 2 23:58:30.032887 sshd[1239]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 23:58:30.036281 systemd-logind[1142]: New session 5 of user core. Oct 2 23:58:30.036988 systemd[1]: Started session-5.scope. Oct 2 23:58:30.090326 sshd[1239]: pam_unix(sshd:session): session closed for user core Oct 2 23:58:30.091753 systemd[1]: sshd@2-139.178.89.117:22-139.178.89.65:46366.service: Deactivated successfully. Oct 2 23:58:30.092052 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 23:58:30.092315 systemd-logind[1142]: Session 5 logged out. Waiting for processes to exit. Oct 2 23:58:30.092920 systemd[1]: Started sshd@3-139.178.89.117:22-139.178.89.65:46368.service. 
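Note on the boot-time summary above: 1.851 s + 6.410 s + 14.242 s adds up to the reported 22.504 s (the components are rounded independently). The same breakdown, plus per-unit costs, can be pulled from the running system, e.g.:

  systemd-analyze time                                 # kernel + initrd + userspace split, as logged above
  systemd-analyze blame | head                         # slowest units first
  systemd-analyze critical-chain multi-user.target     # the chain that gated multi-user.target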
Oct 2 23:58:30.093306 systemd-logind[1142]: Removed session 5. Oct 2 23:58:30.128790 sshd[1245]: Accepted publickey for core from 139.178.89.65 port 46368 ssh2: RSA SHA256:6bSavBiaJ/6Bay5oW/hArqm18cB9FuXY6RiKsI2WLUU Oct 2 23:58:30.129705 sshd[1245]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 23:58:30.132898 systemd-logind[1142]: New session 6 of user core. Oct 2 23:58:30.133545 systemd[1]: Started session-6.scope. Oct 2 23:58:30.188107 sshd[1245]: pam_unix(sshd:session): session closed for user core Oct 2 23:58:30.189526 systemd[1]: sshd@3-139.178.89.117:22-139.178.89.65:46368.service: Deactivated successfully. Oct 2 23:58:30.189827 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 23:58:30.190185 systemd-logind[1142]: Session 6 logged out. Waiting for processes to exit. Oct 2 23:58:30.190689 systemd[1]: Started sshd@4-139.178.89.117:22-139.178.89.65:46376.service. Oct 2 23:58:30.191081 systemd-logind[1142]: Removed session 6. Oct 2 23:58:30.226740 sshd[1251]: Accepted publickey for core from 139.178.89.65 port 46376 ssh2: RSA SHA256:6bSavBiaJ/6Bay5oW/hArqm18cB9FuXY6RiKsI2WLUU Oct 2 23:58:30.227785 sshd[1251]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 23:58:30.231231 systemd-logind[1142]: New session 7 of user core. Oct 2 23:58:30.231937 systemd[1]: Started session-7.scope. Oct 2 23:58:30.316988 sudo[1254]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 23:58:30.317592 sudo[1254]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 23:58:30.331830 dbus-daemon[1111]: \xd0m\xaf\xf1\xa9U: received setenforce notice (enforcing=-1869613344) Oct 2 23:58:30.336773 sudo[1254]: pam_unix(sudo:session): session closed for user root Oct 2 23:58:30.342179 sshd[1251]: pam_unix(sshd:session): session closed for user core Oct 2 23:58:30.348979 systemd[1]: sshd@4-139.178.89.117:22-139.178.89.65:46376.service: Deactivated successfully. Oct 2 23:58:30.350610 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 23:58:30.352384 systemd-logind[1142]: Session 7 logged out. Waiting for processes to exit. Oct 2 23:58:30.354918 systemd[1]: Started sshd@5-139.178.89.117:22-139.178.89.65:46384.service. Oct 2 23:58:30.357242 systemd-logind[1142]: Removed session 7. Oct 2 23:58:30.462603 sshd[1258]: Accepted publickey for core from 139.178.89.65 port 46384 ssh2: RSA SHA256:6bSavBiaJ/6Bay5oW/hArqm18cB9FuXY6RiKsI2WLUU Oct 2 23:58:30.464510 sshd[1258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 23:58:30.470288 systemd-logind[1142]: New session 8 of user core. Oct 2 23:58:30.471540 systemd[1]: Started session-8.scope. Oct 2 23:58:30.530953 sudo[1262]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 23:58:30.531096 sudo[1262]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 23:58:30.532889 sudo[1262]: pam_unix(sudo:session): session closed for user root Oct 2 23:58:30.535135 sudo[1261]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 23:58:30.535239 sudo[1261]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 23:58:30.540488 systemd[1]: Stopping audit-rules.service... 
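Note: removing the two rule files from /etc/audit/rules.d and restarting audit-rules.service, as above, is why the following records show auditctl and augenrules both reporting "No rules". A sketch of the equivalent manual flow, assuming the stock audit userspace tools (the exact ExecStart of Flatcar's audit-rules unit is not shown in this log):

  auditctl -D          # flush the loaded kernel rules; produces the kind of CONFIG_CHANGE op=remove_rule record seen below
  augenrules --load    # regenerate /etc/audit/audit.rules from /etc/audit/rules.d/*.rules and load it
  auditctl -l          # list loaded rules; prints "No rules" now that rules.d has been emptied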
Oct 2 23:58:30.540000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 23:58:30.541345 auditctl[1265]: No rules Oct 2 23:58:30.541525 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 23:58:30.541614 systemd[1]: Stopped audit-rules.service. Oct 2 23:58:30.542382 systemd[1]: Starting audit-rules.service... Oct 2 23:58:30.546861 kernel: kauditd_printk_skb: 110 callbacks suppressed Oct 2 23:58:30.546892 kernel: audit: type=1305 audit(1696291110.540:133): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 23:58:30.553108 augenrules[1282]: No rules Oct 2 23:58:30.553443 systemd[1]: Finished audit-rules.service. Oct 2 23:58:30.553940 sudo[1261]: pam_unix(sudo:session): session closed for user root Oct 2 23:58:30.554832 sshd[1258]: pam_unix(sshd:session): session closed for user core Oct 2 23:58:30.556419 systemd[1]: sshd@5-139.178.89.117:22-139.178.89.65:46384.service: Deactivated successfully. Oct 2 23:58:30.556773 systemd[1]: session-8.scope: Deactivated successfully. Oct 2 23:58:30.557209 systemd-logind[1142]: Session 8 logged out. Waiting for processes to exit. Oct 2 23:58:30.557775 systemd[1]: Started sshd@6-139.178.89.117:22-139.178.89.65:46394.service. Oct 2 23:58:30.558217 systemd-logind[1142]: Removed session 8. Oct 2 23:58:30.540000 audit[1265]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd97070f10 a2=420 a3=0 items=0 ppid=1 pid=1265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:30.593441 kernel: audit: type=1300 audit(1696291110.540:133): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd97070f10 a2=420 a3=0 items=0 ppid=1 pid=1265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:30.593483 kernel: audit: type=1327 audit(1696291110.540:133): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 23:58:30.540000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 23:58:30.602925 kernel: audit: type=1131 audit(1696291110.541:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:30.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:30.625365 kernel: audit: type=1130 audit(1696291110.553:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:30.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 23:58:30.632624 sshd[1288]: Accepted publickey for core from 139.178.89.65 port 46394 ssh2: RSA SHA256:6bSavBiaJ/6Bay5oW/hArqm18cB9FuXY6RiKsI2WLUU Oct 2 23:58:30.634676 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 23:58:30.636929 systemd-logind[1142]: New session 9 of user core. Oct 2 23:58:30.637402 systemd[1]: Started session-9.scope. Oct 2 23:58:30.647814 kernel: audit: type=1106 audit(1696291110.553:136): pid=1261 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 23:58:30.553000 audit[1261]: USER_END pid=1261 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 23:58:30.673805 kernel: audit: type=1104 audit(1696291110.553:137): pid=1261 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 23:58:30.553000 audit[1261]: CRED_DISP pid=1261 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 23:58:30.684481 sudo[1291]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 23:58:30.684591 sudo[1291]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 23:58:30.697335 kernel: audit: type=1106 audit(1696291110.555:138): pid=1258 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 23:58:30.555000 audit[1258]: USER_END pid=1258 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 23:58:30.555000 audit[1258]: CRED_DISP pid=1258 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 23:58:30.755161 kernel: audit: type=1104 audit(1696291110.555:139): pid=1258 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 23:58:30.755189 kernel: audit: type=1131 audit(1696291110.556:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.89.117:22-139.178.89.65:46384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:30.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.89.117:22-139.178.89.65:46384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 23:58:30.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.89.117:22-139.178.89.65:46394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:30.632000 audit[1288]: USER_ACCT pid=1288 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 23:58:30.634000 audit[1288]: CRED_ACQ pid=1288 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 23:58:30.634000 audit[1288]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4d44ec30 a2=3 a3=0 items=0 ppid=1 pid=1288 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:30.634000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 23:58:30.639000 audit[1288]: USER_START pid=1288 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 23:58:30.639000 audit[1290]: CRED_ACQ pid=1290 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 23:58:30.684000 audit[1291]: USER_ACCT pid=1291 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 23:58:30.684000 audit[1291]: CRED_REFR pid=1291 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 23:58:30.685000 audit[1291]: USER_START pid=1291 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 23:58:31.532289 systemd[1]: Reloading. Oct 2 23:58:31.562225 /usr/lib/systemd/system-generators/torcx-generator[1321]: time="2023-10-02T23:58:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 23:58:31.562243 /usr/lib/systemd/system-generators/torcx-generator[1321]: time="2023-10-02T23:58:31Z" level=info msg="torcx already run" Oct 2 23:58:31.617828 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 23:58:31.617837 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
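Note: the two locksmithd.service warnings repeat on every reload because the vendor unit under /usr/lib still uses the deprecated CPUShares= and MemoryLimit= directives. A sketch of shadowing it from /etc with the replacements systemd suggests; the concrete weight and limit values are placeholders, since the originals are not shown in this log:

  sudo cp /usr/lib/systemd/system/locksmithd.service /etc/systemd/system/locksmithd.service
  sudo sed -i -e 's/^CPUShares=.*/CPUWeight=100/' \
              -e 's/^MemoryLimit=.*/MemoryMax=128M/' /etc/systemd/system/locksmithd.service
  sudo systemctl daemon-reload     # the copy in /etc takes precedence over the /usr/lib unit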
Oct 2 23:58:31.632105 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 23:58:31.674000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.674000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.674000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.674000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.674000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.674000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.674000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.674000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.674000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.674000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.674000 audit: BPF prog-id=32 op=LOAD Oct 2 23:58:31.674000 audit: BPF prog-id=25 op=UNLOAD Oct 2 23:58:31.674000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.674000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.674000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.674000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.674000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.674000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.674000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.674000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.674000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit: BPF prog-id=33 op=LOAD Oct 2 23:58:31.675000 audit: BPF prog-id=30 op=UNLOAD Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit: BPF prog-id=34 op=LOAD Oct 2 23:58:31.675000 audit: BPF prog-id=22 op=UNLOAD Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit: BPF prog-id=35 op=LOAD Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.675000 audit: BPF prog-id=36 op=LOAD Oct 2 
23:58:31.675000 audit: BPF prog-id=23 op=UNLOAD Oct 2 23:58:31.675000 audit: BPF prog-id=24 op=UNLOAD Oct 2 23:58:31.676000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.676000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.676000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.676000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.676000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.676000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.676000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.676000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.676000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit: BPF prog-id=37 op=LOAD Oct 2 23:58:31.677000 audit: BPF prog-id=27 op=UNLOAD Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 
audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit: BPF prog-id=38 op=LOAD Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit: BPF prog-id=39 op=LOAD Oct 2 23:58:31.677000 audit: BPF prog-id=28 op=UNLOAD Oct 2 23:58:31.677000 audit: BPF prog-id=29 op=UNLOAD Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { 
perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.678000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.678000 audit: BPF prog-id=40 op=LOAD Oct 2 23:58:31.678000 audit: BPF prog-id=21 op=UNLOAD Oct 2 23:58:31.678000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.678000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit: BPF prog-id=41 op=LOAD Oct 2 23:58:31.679000 audit: BPF prog-id=26 op=UNLOAD Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit: BPF prog-id=42 op=LOAD Oct 2 23:58:31.679000 audit: BPF prog-id=16 op=UNLOAD Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit: BPF prog-id=43 op=LOAD Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.679000 audit: BPF prog-id=44 op=LOAD Oct 2 23:58:31.679000 audit: BPF prog-id=17 op=UNLOAD Oct 2 23:58:31.679000 audit: BPF prog-id=18 op=UNLOAD Oct 2 23:58:31.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
23:58:31.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.680000 audit: BPF prog-id=45 op=LOAD Oct 2 23:58:31.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:31.680000 audit: BPF prog-id=46 op=LOAD Oct 2 23:58:31.680000 audit: BPF prog-id=19 op=UNLOAD Oct 2 23:58:31.680000 audit: BPF prog-id=20 op=UNLOAD Oct 2 23:58:31.684064 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 23:58:31.687644 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 23:58:31.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:31.687889 systemd[1]: Reached target network-online.target. Oct 2 23:58:31.688535 systemd[1]: Started kubelet.service. 
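Note: each systemd reload triggers a burst of capability2 denials for capabilities 38 and 39 (CAP_PERFMON and CAP_BPF) in the kernel_t domain, yet the paired "BPF prog-id=... op=LOAD" records show the programs still load, which is the usual pattern when the SELinux policy predates those capabilities and the kernel falls back to its CAP_SYS_ADMIN check. A sketch for pulling these denials out of the audit log and, only if silencing them is actually wanted, drafting a local module (ausearch and audit2allow are assumed to be available, which is not a given on this image):

  ausearch -m AVC -ts recent -c systemd | grep capability2   # list the bpf/perfmon denials above
  audit2allow -a -M systemd_bpf                              # draft a local policy module; module name is illustrative
  # semodule -i systemd_bpf.pp                               # install it only if the denials should be silenced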
Oct 2 23:58:31.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:32.202948 kubelet[1378]: E1002 23:58:32.202832 1378 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Oct 2 23:58:32.206894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 23:58:32.207259 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 23:58:32.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 23:58:32.698468 systemd[1]: Stopped kubelet.service. Oct 2 23:58:32.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:32.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:32.709686 systemd[1]: Reloading. Oct 2 23:58:32.738259 /usr/lib/systemd/system-generators/torcx-generator[1487]: time="2023-10-02T23:58:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 23:58:32.738284 /usr/lib/systemd/system-generators/torcx-generator[1487]: time="2023-10-02T23:58:32Z" level=info msg="torcx already run" Oct 2 23:58:32.787853 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 23:58:32.787861 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 23:58:32.801734 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
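Note: the kubelet exit above is the expected first-boot state; /var/lib/kubelet/config.yaml does not exist yet, and on a kubeadm-managed node that file is written by kubeadm init/join rather than shipped with the OS, so the unit keeps failing until that runs. Purely as a sketch, a hand-written minimal KubeletConfiguration would look like the following (the cgroupDriver value mirrors the SystemdCgroup:true runc option in the containerd config earlier; everything else is left at defaults):

  sudo mkdir -p /var/lib/kubelet
  sudo tee /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd    # match containerd's SystemdCgroup=true runc option
  EOF
  sudo systemctl restart kubelet   # the unit may still need a kubeconfig and bootstrap credentials beyond this file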
Oct 2 23:58:32.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.843000 audit: BPF prog-id=47 op=LOAD Oct 2 23:58:32.843000 audit: BPF prog-id=32 op=UNLOAD Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit: BPF prog-id=48 op=LOAD Oct 2 23:58:32.844000 audit: BPF prog-id=33 op=UNLOAD Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit: BPF prog-id=49 op=LOAD Oct 2 23:58:32.844000 audit: BPF prog-id=34 op=UNLOAD Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit: BPF prog-id=50 op=LOAD Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.844000 audit: BPF prog-id=51 op=LOAD Oct 2 23:58:32.844000 audit: BPF prog-id=35 op=UNLOAD Oct 2 23:58:32.845000 audit: BPF prog-id=36 op=UNLOAD Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit: BPF prog-id=52 op=LOAD Oct 2 23:58:32.846000 audit: BPF prog-id=37 op=UNLOAD Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
23:58:32.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit: BPF prog-id=53 op=LOAD Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit: BPF prog-id=54 op=LOAD Oct 2 23:58:32.846000 audit: BPF prog-id=38 op=UNLOAD Oct 2 23:58:32.846000 audit: BPF prog-id=39 op=UNLOAD Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.847000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.847000 audit: BPF prog-id=55 op=LOAD Oct 2 23:58:32.847000 audit: BPF prog-id=40 op=UNLOAD Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit: BPF prog-id=56 op=LOAD Oct 2 23:58:32.848000 audit: BPF prog-id=41 op=UNLOAD Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit: BPF prog-id=57 op=LOAD Oct 2 23:58:32.848000 audit: BPF prog-id=42 op=UNLOAD Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit: BPF prog-id=58 op=LOAD Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.849000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.849000 audit: BPF prog-id=59 op=LOAD Oct 2 23:58:32.849000 audit: BPF prog-id=43 op=UNLOAD Oct 2 23:58:32.849000 audit: BPF prog-id=44 op=UNLOAD Oct 2 23:58:32.849000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.849000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.849000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.849000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.849000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.849000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
23:58:32.849000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.849000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.849000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.849000 audit: BPF prog-id=60 op=LOAD Oct 2 23:58:32.849000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.849000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.849000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.849000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.849000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.849000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.849000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.849000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.849000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:32.849000 audit: BPF prog-id=61 op=LOAD Oct 2 23:58:32.849000 audit: BPF prog-id=45 op=UNLOAD Oct 2 23:58:32.849000 audit: BPF prog-id=46 op=UNLOAD Oct 2 23:58:32.855301 systemd[1]: Started kubelet.service. Oct 2 23:58:32.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:32.879109 kubelet[1543]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 23:58:32.879109 kubelet[1543]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Oct 2 23:58:32.879109 kubelet[1543]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 23:58:32.879322 kubelet[1543]: I1002 23:58:32.879131 1543 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 23:58:33.076584 kubelet[1543]: I1002 23:58:33.076516 1543 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Oct 2 23:58:33.076584 kubelet[1543]: I1002 23:58:33.076545 1543 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 23:58:33.076666 kubelet[1543]: I1002 23:58:33.076661 1543 server.go:837] "Client rotation is on, will bootstrap in background" Oct 2 23:58:33.087746 kubelet[1543]: I1002 23:58:33.087714 1543 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 23:58:33.116737 kubelet[1543]: I1002 23:58:33.116700 1543 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 2 23:58:33.116825 kubelet[1543]: I1002 23:58:33.116790 1543 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 23:58:33.116869 kubelet[1543]: I1002 23:58:33.116845 1543 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Oct 2 23:58:33.116869 kubelet[1543]: I1002 23:58:33.116854 1543 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 23:58:33.116869 kubelet[1543]: I1002 23:58:33.116860 1543 container_manager_linux.go:302] "Creating device plugin manager" Oct 2 23:58:33.117170 kubelet[1543]: I1002 23:58:33.117140 1543 state_mem.go:36] "Initialized new in-memory state store" Oct 2 23:58:33.126083 kubelet[1543]: I1002 23:58:33.126045 1543 kubelet.go:405] "Attempting to sync node with API server" Oct 2 23:58:33.126083 kubelet[1543]: I1002 23:58:33.126061 1543 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 23:58:33.126540 kubelet[1543]: I1002 23:58:33.126504 1543 kubelet.go:309] "Adding apiserver pod source" Oct 2 
23:58:33.126540 kubelet[1543]: I1002 23:58:33.126527 1543 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 23:58:33.126587 kubelet[1543]: E1002 23:58:33.126546 1543 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:33.126587 kubelet[1543]: E1002 23:58:33.126558 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:33.128814 kubelet[1543]: I1002 23:58:33.128766 1543 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 23:58:33.130921 kubelet[1543]: W1002 23:58:33.130871 1543 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 2 23:58:33.133780 kubelet[1543]: I1002 23:58:33.133708 1543 server.go:1168] "Started kubelet" Oct 2 23:58:33.134023 kubelet[1543]: I1002 23:58:33.133974 1543 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Oct 2 23:58:33.134023 kubelet[1543]: I1002 23:58:33.133973 1543 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 23:58:33.134888 kubelet[1543]: E1002 23:58:33.134801 1543 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 23:58:33.134888 kubelet[1543]: E1002 23:58:33.134879 1543 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 23:58:33.137628 kubelet[1543]: I1002 23:58:33.137542 1543 server.go:461] "Adding debug handlers to kubelet server" Oct 2 23:58:33.137000 audit[1543]: AVC avc: denied { mac_admin } for pid=1543 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:33.137000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 23:58:33.137000 audit[1543]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000650bd0 a1=c0002c1d10 a2=c000650ba0 a3=25 items=0 ppid=1 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:33.137000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 23:58:33.137000 audit[1543]: AVC avc: denied { mac_admin } for pid=1543 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:33.137000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 23:58:33.137000 audit[1543]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000089280 a1=c0002c1d28 a2=c000650c60 a3=25 items=0 ppid=1 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:33.137000 audit: PROCTITLE 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 23:58:33.139310 kubelet[1543]: I1002 23:58:33.138151 1543 kubelet.go:1355] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 23:58:33.139310 kubelet[1543]: I1002 23:58:33.138269 1543 kubelet.go:1359] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 23:58:33.139310 kubelet[1543]: I1002 23:58:33.138462 1543 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 23:58:33.139310 kubelet[1543]: I1002 23:58:33.138601 1543 volume_manager.go:284] "Starting Kubelet Volume Manager" Oct 2 23:58:33.140490 kubelet[1543]: I1002 23:58:33.140393 1543 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Oct 2 23:58:33.140823 kubelet[1543]: E1002 23:58:33.140756 1543 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.213\" not found" Oct 2 23:58:33.151804 kubelet[1543]: E1002 23:58:33.151755 1543 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.67.124.213\" not found" node="10.67.124.213" Oct 2 23:58:33.182466 kubelet[1543]: I1002 23:58:33.182446 1543 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 23:58:33.182466 kubelet[1543]: I1002 23:58:33.182464 1543 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 23:58:33.182604 kubelet[1543]: I1002 23:58:33.182480 1543 state_mem.go:36] "Initialized new in-memory state store" Oct 2 23:58:33.183354 kubelet[1543]: I1002 23:58:33.183337 1543 policy_none.go:49] "None policy: Start" Oct 2 23:58:33.184093 kubelet[1543]: I1002 23:58:33.184079 1543 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 23:58:33.184153 kubelet[1543]: I1002 23:58:33.184101 1543 state_mem.go:35] "Initializing new in-memory state store" Oct 2 23:58:33.189570 systemd[1]: Created slice kubepods.slice. Oct 2 23:58:33.192962 systemd[1]: Created slice kubepods-burstable.slice. 
Oct 2 23:58:33.192000 audit[1569]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:33.192000 audit[1569]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffde23cbbf0 a2=0 a3=7ffde23cbbdc items=0 ppid=1543 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:33.192000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 23:58:33.193000 audit[1572]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:33.193000 audit[1572]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffd7cb8f060 a2=0 a3=7ffd7cb8f04c items=0 ppid=1543 pid=1572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:33.193000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 23:58:33.195159 systemd[1]: Created slice kubepods-besteffort.slice. Oct 2 23:58:33.213301 kubelet[1543]: I1002 23:58:33.213267 1543 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 23:58:33.212000 audit[1543]: AVC avc: denied { mac_admin } for pid=1543 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:33.212000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 23:58:33.212000 audit[1543]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0006511a0 a1=c000c43cc8 a2=c000651170 a3=25 items=0 ppid=1 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:33.212000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 23:58:33.213488 kubelet[1543]: I1002 23:58:33.213311 1543 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 23:58:33.213669 kubelet[1543]: I1002 23:58:33.213622 1543 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 23:58:33.213732 kubelet[1543]: E1002 23:58:33.213720 1543 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.124.213\" not found" Oct 2 23:58:33.195000 audit[1574]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1574 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:33.195000 audit[1574]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe308f0430 a2=0 a3=7ffe308f041c items=0 ppid=1543 pid=1574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:33.195000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 23:58:33.224000 audit[1579]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:33.224000 audit[1579]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc1a877700 a2=0 a3=7ffc1a8776ec items=0 ppid=1543 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:33.224000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 23:58:33.242376 kubelet[1543]: I1002 23:58:33.242359 1543 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.213" Oct 2 23:58:33.249141 kubelet[1543]: I1002 23:58:33.249098 1543 kubelet_node_status.go:73] "Successfully registered node" node="10.67.124.213" Oct 2 23:58:33.261644 kubelet[1543]: I1002 23:58:33.261630 1543 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 23:58:33.261808 env[1152]: time="2023-10-02T23:58:33.261783773Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 2 23:58:33.262039 kubelet[1543]: I1002 23:58:33.261877 1543 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 23:58:33.261000 audit[1584]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:33.261000 audit[1584]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe75606910 a2=0 a3=7ffe756068fc items=0 ppid=1543 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:33.261000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 23:58:33.262353 kubelet[1543]: I1002 23:58:33.262330 1543 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Oct 2 23:58:33.262000 audit[1585]: NETFILTER_CFG table=mangle:7 family=10 entries=2 op=nft_register_chain pid=1585 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:33.262000 audit[1585]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffa5a26980 a2=0 a3=7fffa5a2696c items=0 ppid=1543 pid=1585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:33.262000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 23:58:33.262000 audit[1586]: NETFILTER_CFG table=mangle:8 family=2 entries=1 op=nft_register_chain pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:33.262000 audit[1586]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd29ceefc0 a2=0 a3=7ffd29ceefac items=0 ppid=1543 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:33.262000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 23:58:33.263089 kubelet[1543]: I1002 23:58:33.262923 1543 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Oct 2 23:58:33.263089 kubelet[1543]: I1002 23:58:33.262941 1543 status_manager.go:207] "Starting to sync pod status with apiserver" Oct 2 23:58:33.263089 kubelet[1543]: I1002 23:58:33.262954 1543 kubelet.go:2257] "Starting kubelet main sync loop" Oct 2 23:58:33.263089 kubelet[1543]: E1002 23:58:33.262976 1543 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 23:58:33.263000 audit[1587]: NETFILTER_CFG table=mangle:9 family=10 entries=1 op=nft_register_chain pid=1587 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:33.263000 audit[1587]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf31e64e0 a2=0 a3=7ffcf31e64cc items=0 ppid=1543 pid=1587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:33.263000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 23:58:33.263000 audit[1588]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1588 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:33.263000 audit[1588]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffe603b8790 a2=0 a3=7ffe603b877c items=0 ppid=1543 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:33.263000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 23:58:33.263000 audit[1589]: NETFILTER_CFG table=nat:11 family=10 entries=2 op=nft_register_chain pid=1589 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:33.263000 audit[1589]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffd59139050 a2=0 a3=7ffd5913903c items=0 ppid=1543 pid=1589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:33.263000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 23:58:33.263000 audit[1590]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_chain pid=1590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:33.263000 audit[1590]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc9910330 a2=0 a3=7ffcc991031c items=0 ppid=1543 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:33.263000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 23:58:33.264000 audit[1591]: NETFILTER_CFG table=filter:13 family=10 entries=2 op=nft_register_chain pid=1591 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:33.264000 audit[1591]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff9155af40 a2=0 a3=7fff9155af2c items=0 ppid=1543 pid=1591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:33.264000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 23:58:34.078431 kubelet[1543]: I1002 23:58:34.078312 1543 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 23:58:34.079612 kubelet[1543]: W1002 23:58:34.078717 1543 reflector.go:456] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received Oct 2 23:58:34.079612 kubelet[1543]: W1002 23:58:34.078794 1543 reflector.go:456] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received Oct 2 23:58:34.079612 kubelet[1543]: W1002 23:58:34.078794 1543 reflector.go:456] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received Oct 2 23:58:34.127828 kubelet[1543]: I1002 23:58:34.127704 1543 apiserver.go:52] "Watching apiserver" Oct 2 23:58:34.128113 kubelet[1543]: E1002 23:58:34.127706 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:34.133222 kubelet[1543]: I1002 23:58:34.133145 1543 topology_manager.go:212] "Topology Admit Handler" Oct 2 23:58:34.133493 kubelet[1543]: I1002 23:58:34.133421 1543 topology_manager.go:212] "Topology Admit Handler" Oct 2 23:58:34.144213 kubelet[1543]: I1002 23:58:34.144160 1543 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Oct 2 23:58:34.147256 systemd[1]: Created slice kubepods-burstable-pod47bb3314_73da_477f_92b3_f06384a9e1c9.slice. 
Oct 2 23:58:34.147484 kubelet[1543]: I1002 23:58:34.147272 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-cni-path\") pod \"cilium-nqz78\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " pod="kube-system/cilium-nqz78" Oct 2 23:58:34.147484 kubelet[1543]: I1002 23:58:34.147310 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47bb3314-73da-477f-92b3-f06384a9e1c9-hubble-tls\") pod \"cilium-nqz78\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " pod="kube-system/cilium-nqz78" Oct 2 23:58:34.147484 kubelet[1543]: I1002 23:58:34.147361 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ggbc\" (UniqueName: \"kubernetes.io/projected/47bb3314-73da-477f-92b3-f06384a9e1c9-kube-api-access-2ggbc\") pod \"cilium-nqz78\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " pod="kube-system/cilium-nqz78" Oct 2 23:58:34.147484 kubelet[1543]: I1002 23:58:34.147386 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f230a61c-86f6-454b-84e6-43f1a5ac2ad9-lib-modules\") pod \"kube-proxy-j8f5t\" (UID: \"f230a61c-86f6-454b-84e6-43f1a5ac2ad9\") " pod="kube-system/kube-proxy-j8f5t" Oct 2 23:58:34.147484 kubelet[1543]: I1002 23:58:34.147427 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-cilium-run\") pod \"cilium-nqz78\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " pod="kube-system/cilium-nqz78" Oct 2 23:58:34.147484 kubelet[1543]: I1002 23:58:34.147444 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-cilium-cgroup\") pod \"cilium-nqz78\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " pod="kube-system/cilium-nqz78" Oct 2 23:58:34.147675 kubelet[1543]: I1002 23:58:34.147460 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-lib-modules\") pod \"cilium-nqz78\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " pod="kube-system/cilium-nqz78" Oct 2 23:58:34.147675 kubelet[1543]: I1002 23:58:34.147473 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-xtables-lock\") pod \"cilium-nqz78\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " pod="kube-system/cilium-nqz78" Oct 2 23:58:34.147675 kubelet[1543]: I1002 23:58:34.147484 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-hostproc\") pod \"cilium-nqz78\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " pod="kube-system/cilium-nqz78" Oct 2 23:58:34.147675 kubelet[1543]: I1002 23:58:34.147556 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-etc-cni-netd\") pod \"cilium-nqz78\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " pod="kube-system/cilium-nqz78" Oct 2 23:58:34.147675 kubelet[1543]: I1002 23:58:34.147568 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47bb3314-73da-477f-92b3-f06384a9e1c9-clustermesh-secrets\") pod \"cilium-nqz78\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " pod="kube-system/cilium-nqz78" Oct 2 23:58:34.147675 kubelet[1543]: I1002 23:58:34.147579 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-host-proc-sys-kernel\") pod \"cilium-nqz78\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " pod="kube-system/cilium-nqz78" Oct 2 23:58:34.147892 kubelet[1543]: I1002 23:58:34.147607 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f230a61c-86f6-454b-84e6-43f1a5ac2ad9-kube-proxy\") pod \"kube-proxy-j8f5t\" (UID: \"f230a61c-86f6-454b-84e6-43f1a5ac2ad9\") " pod="kube-system/kube-proxy-j8f5t" Oct 2 23:58:34.147892 kubelet[1543]: I1002 23:58:34.147640 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-bpf-maps\") pod \"cilium-nqz78\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " pod="kube-system/cilium-nqz78" Oct 2 23:58:34.147892 kubelet[1543]: I1002 23:58:34.147651 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47bb3314-73da-477f-92b3-f06384a9e1c9-cilium-config-path\") pod \"cilium-nqz78\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " pod="kube-system/cilium-nqz78" Oct 2 23:58:34.147892 kubelet[1543]: I1002 23:58:34.147681 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-host-proc-sys-net\") pod \"cilium-nqz78\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " pod="kube-system/cilium-nqz78" Oct 2 23:58:34.147892 kubelet[1543]: I1002 23:58:34.147710 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f230a61c-86f6-454b-84e6-43f1a5ac2ad9-xtables-lock\") pod \"kube-proxy-j8f5t\" (UID: \"f230a61c-86f6-454b-84e6-43f1a5ac2ad9\") " pod="kube-system/kube-proxy-j8f5t" Oct 2 23:58:34.148054 kubelet[1543]: I1002 23:58:34.147742 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhsl9\" (UniqueName: \"kubernetes.io/projected/f230a61c-86f6-454b-84e6-43f1a5ac2ad9-kube-api-access-xhsl9\") pod \"kube-proxy-j8f5t\" (UID: \"f230a61c-86f6-454b-84e6-43f1a5ac2ad9\") " pod="kube-system/kube-proxy-j8f5t" Oct 2 23:58:34.148054 kubelet[1543]: I1002 23:58:34.147757 1543 reconciler.go:41] "Reconciler: start to sync state" Oct 2 23:58:34.169297 systemd[1]: Created slice kubepods-besteffort-podf230a61c_86f6_454b_84e6_43f1a5ac2ad9.slice. 
Oct 2 23:58:34.399156 sudo[1291]: pam_unix(sudo:session): session closed for user root Oct 2 23:58:34.398000 audit[1291]: USER_END pid=1291 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 23:58:34.399000 audit[1291]: CRED_DISP pid=1291 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 23:58:34.402035 sshd[1288]: pam_unix(sshd:session): session closed for user core Oct 2 23:58:34.404000 audit[1288]: USER_END pid=1288 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 23:58:34.404000 audit[1288]: CRED_DISP pid=1288 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 23:58:34.407826 systemd[1]: sshd@6-139.178.89.117:22-139.178.89.65:46394.service: Deactivated successfully. Oct 2 23:58:34.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.89.117:22-139.178.89.65:46394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 23:58:34.409614 systemd[1]: session-9.scope: Deactivated successfully. Oct 2 23:58:34.411429 systemd-logind[1142]: Session 9 logged out. Waiting for processes to exit. Oct 2 23:58:34.413646 systemd-logind[1142]: Removed session 9. 
Oct 2 23:58:34.470458 env[1152]: time="2023-10-02T23:58:34.470330693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nqz78,Uid:47bb3314-73da-477f-92b3-f06384a9e1c9,Namespace:kube-system,Attempt:0,}" Oct 2 23:58:34.489591 env[1152]: time="2023-10-02T23:58:34.489450477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j8f5t,Uid:f230a61c-86f6-454b-84e6-43f1a5ac2ad9,Namespace:kube-system,Attempt:0,}" Oct 2 23:58:35.128718 kubelet[1543]: E1002 23:58:35.128606 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:35.144576 env[1152]: time="2023-10-02T23:58:35.144528431Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 23:58:35.145753 env[1152]: time="2023-10-02T23:58:35.145689780Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 23:58:35.146635 env[1152]: time="2023-10-02T23:58:35.146590016Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 23:58:35.146981 env[1152]: time="2023-10-02T23:58:35.146941721Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 23:58:35.147724 env[1152]: time="2023-10-02T23:58:35.147684655Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 23:58:35.148884 env[1152]: time="2023-10-02T23:58:35.148845199Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 23:58:35.149227 env[1152]: time="2023-10-02T23:58:35.149188090Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 23:58:35.150099 env[1152]: time="2023-10-02T23:58:35.150060308Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 23:58:35.157000 env[1152]: time="2023-10-02T23:58:35.156886109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 23:58:35.157000 env[1152]: time="2023-10-02T23:58:35.156937453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 23:58:35.157000 env[1152]: time="2023-10-02T23:58:35.156958662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 23:58:35.157135 env[1152]: time="2023-10-02T23:58:35.157038527Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003 pid=1611 runtime=io.containerd.runc.v2 Oct 2 23:58:35.157175 env[1152]: time="2023-10-02T23:58:35.157153427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 23:58:35.157175 env[1152]: time="2023-10-02T23:58:35.157169985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 23:58:35.157212 env[1152]: time="2023-10-02T23:58:35.157176625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 23:58:35.157244 env[1152]: time="2023-10-02T23:58:35.157229419Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd070a146bbd0e619724d6ea118170b04c70769d6edc94f0e545532b2288d279 pid=1613 runtime=io.containerd.runc.v2 Oct 2 23:58:35.162894 systemd[1]: Started cri-containerd-dd070a146bbd0e619724d6ea118170b04c70769d6edc94f0e545532b2288d279.scope. Oct 2 23:58:35.168000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.168000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.168000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.168000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.168000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.168000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.168000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.168000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.168000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.168000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.168000 audit: BPF prog-id=62 op=LOAD Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { bpf } for 
pid=1632 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001bdc48 a2=10 a3=1c items=0 ppid=1613 pid=1632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:35.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464303730613134366262643065363139373234643665613131383137 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { perfmon } for pid=1632 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001bd6b0 a2=3c a3=c items=0 ppid=1613 pid=1632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:35.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464303730613134366262643065363139373234643665613131383137 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { bpf } for pid=1632 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { bpf } for pid=1632 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { bpf } for pid=1632 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { perfmon } for pid=1632 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { perfmon } for pid=1632 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { perfmon } for pid=1632 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { perfmon } for pid=1632 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { perfmon } for pid=1632 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { bpf } for pid=1632 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
23:58:35.169000 audit[1632]: AVC avc: denied { bpf } for pid=1632 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit: BPF prog-id=63 op=LOAD Oct 2 23:58:35.169000 audit[1632]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bd9d8 a2=78 a3=c000024820 items=0 ppid=1613 pid=1632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:35.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464303730613134366262643065363139373234643665613131383137 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { bpf } for pid=1632 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { bpf } for pid=1632 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { perfmon } for pid=1632 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { perfmon } for pid=1632 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { perfmon } for pid=1632 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { perfmon } for pid=1632 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { perfmon } for pid=1632 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { bpf } for pid=1632 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { bpf } for pid=1632 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit: BPF prog-id=64 op=LOAD Oct 2 23:58:35.169000 audit[1632]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001bd770 a2=78 a3=c000024868 items=0 ppid=1613 pid=1632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:35.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464303730613134366262643065363139373234643665613131383137 Oct 2 23:58:35.169000 audit: BPF 
prog-id=64 op=UNLOAD Oct 2 23:58:35.169000 audit: BPF prog-id=63 op=UNLOAD Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { bpf } for pid=1632 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { bpf } for pid=1632 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { bpf } for pid=1632 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { perfmon } for pid=1632 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { perfmon } for pid=1632 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { perfmon } for pid=1632 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { perfmon } for pid=1632 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { perfmon } for pid=1632 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { bpf } for pid=1632 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit[1632]: AVC avc: denied { bpf } for pid=1632 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.169000 audit: BPF prog-id=65 op=LOAD Oct 2 23:58:35.169000 audit[1632]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bdc30 a2=78 a3=c000024c78 items=0 ppid=1613 pid=1632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:35.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464303730613134366262643065363139373234643665613131383137 Oct 2 23:58:35.174166 systemd[1]: Started cri-containerd-9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003.scope. 
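Every audit PROCTITLE record in the burst above carries the triggering command line hex-encoded, with NUL bytes between the arguments. A short Python sketch for turning those values back into argv (not part of any tool appearing in this log) shows that the denied `bpf`/`perfmon` capability checks come from runc being invoked against the two container tasks; the PROCTITLE field is length-limited, which is why the container IDs are cut off in the records themselves.

```python
def decode_proctitle(hex_value: str) -> list:
    """Decode an audit PROCTITLE hex value into its NUL-separated argv."""
    raw = bytes.fromhex(hex_value)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

# Leading portion of the proctitle attached to the runc bpf() syscalls above
# (the full value is truncated in the audit records themselves).
print(decode_proctitle(
    "72756E63002D2D726F6F74"
    "002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"
))
# ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']
```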
Oct 2 23:58:35.179000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit: BPF prog-id=66 op=LOAD Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { bpf } for pid=1630 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1611 pid=1630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:35.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963306631323530366366623264653864303430633637373263333862 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { perfmon } for pid=1630 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=1611 pid=1630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:35.179000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963306631323530366366623264653864303430633637373263333862 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { bpf } for pid=1630 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { bpf } for pid=1630 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { bpf } for pid=1630 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { perfmon } for pid=1630 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { perfmon } for pid=1630 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { perfmon } for pid=1630 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { perfmon } for pid=1630 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { perfmon } for pid=1630 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { bpf } for pid=1630 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { bpf } for pid=1630 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit: BPF prog-id=67 op=LOAD Oct 2 23:58:35.179000 audit[1630]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0001ddec0 items=0 ppid=1611 pid=1630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:35.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963306631323530366366623264653864303430633637373263333862 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { bpf } for pid=1630 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { bpf } for pid=1630 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: 
denied { perfmon } for pid=1630 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { perfmon } for pid=1630 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { perfmon } for pid=1630 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { perfmon } for pid=1630 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { perfmon } for pid=1630 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { bpf } for pid=1630 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { bpf } for pid=1630 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit: BPF prog-id=68 op=LOAD Oct 2 23:58:35.179000 audit[1630]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0001ddf08 items=0 ppid=1611 pid=1630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:35.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963306631323530366366623264653864303430633637373263333862 Oct 2 23:58:35.179000 audit: BPF prog-id=68 op=UNLOAD Oct 2 23:58:35.179000 audit: BPF prog-id=67 op=UNLOAD Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { bpf } for pid=1630 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { bpf } for pid=1630 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { bpf } for pid=1630 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { perfmon } for pid=1630 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { perfmon } for pid=1630 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { perfmon } for pid=1630 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: 
denied { perfmon } for pid=1630 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { perfmon } for pid=1630 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { bpf } for pid=1630 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit[1630]: AVC avc: denied { bpf } for pid=1630 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:35.179000 audit: BPF prog-id=69 op=LOAD Oct 2 23:58:35.179000 audit[1630]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c00030c318 items=0 ppid=1611 pid=1630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:35.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963306631323530366366623264653864303430633637373263333862 Oct 2 23:58:35.186271 env[1152]: time="2023-10-02T23:58:35.186248308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j8f5t,Uid:f230a61c-86f6-454b-84e6-43f1a5ac2ad9,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd070a146bbd0e619724d6ea118170b04c70769d6edc94f0e545532b2288d279\"" Oct 2 23:58:35.187193 env[1152]: time="2023-10-02T23:58:35.187180262Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.6\"" Oct 2 23:58:35.197449 env[1152]: time="2023-10-02T23:58:35.197395642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nqz78,Uid:47bb3314-73da-477f-92b3-f06384a9e1c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\"" Oct 2 23:58:35.269314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount542698662.mount: Deactivated successfully. Oct 2 23:58:36.017747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount170929380.mount: Deactivated successfully. 
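Each sandbox start above is bracketed by a run of `audit: BPF prog-id=… op=LOAD`/`op=UNLOAD` records as systemd and runc attach BPF programs to the new container scopes. To see which program IDs remain loaded once the sandboxes are up, the LOAD/UNLOAD pairs can be tallied. The sketch below assumes the same `prog-id=… op=…` wording as the records here and is only meant for eyeballing this journal.

```python
import re
import sys

# Matches entries such as "audit: BPF prog-id=62 op=LOAD" in the journal text.
BPF_EVENT = re.compile(r"BPF prog-id=(?P<prog>\d+) op=(?P<op>LOAD|UNLOAD)")

def outstanding_programs(lines):
    """Return the set of BPF prog-ids that were loaded but never unloaded."""
    loaded = set()
    for line in lines:
        for match in BPF_EVENT.finditer(line):
            prog = int(match.group("prog"))
            if match.group("op") == "LOAD":
                loaded.add(prog)
            else:
                loaded.discard(prog)
    return loaded

if __name__ == "__main__":
    print(sorted(outstanding_programs(sys.stdin)))
```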
Oct 2 23:58:36.129260 kubelet[1543]: E1002 23:58:36.129218 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:36.349852 env[1152]: time="2023-10-02T23:58:36.349785969Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 23:58:36.350348 env[1152]: time="2023-10-02T23:58:36.350333600Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ec57bbfaaae73ecc3c12f05d5ae974468cc0ef356dee588cd15fd471815c7985,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 23:58:36.350986 env[1152]: time="2023-10-02T23:58:36.350941492Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 23:58:36.351908 env[1152]: time="2023-10-02T23:58:36.351863755Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:8e9eff2f6d0b398f9ac5f5a15c1cb7d5f468f28d64a78d593d57f72a969a54ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 23:58:36.352025 env[1152]: time="2023-10-02T23:58:36.351983201Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.6\" returns image reference \"sha256:ec57bbfaaae73ecc3c12f05d5ae974468cc0ef356dee588cd15fd471815c7985\"" Oct 2 23:58:36.352354 env[1152]: time="2023-10-02T23:58:36.352342818Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 23:58:36.353205 env[1152]: time="2023-10-02T23:58:36.353189584Z" level=info msg="CreateContainer within sandbox \"dd070a146bbd0e619724d6ea118170b04c70769d6edc94f0e545532b2288d279\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 23:58:36.358967 env[1152]: time="2023-10-02T23:58:36.358948590Z" level=info msg="CreateContainer within sandbox \"dd070a146bbd0e619724d6ea118170b04c70769d6edc94f0e545532b2288d279\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1d18be303edc2f00aaec2c14998f5db5a5cf1645e674ffbcd41a9666076c1d22\"" Oct 2 23:58:36.359298 env[1152]: time="2023-10-02T23:58:36.359286421Z" level=info msg="StartContainer for \"1d18be303edc2f00aaec2c14998f5db5a5cf1645e674ffbcd41a9666076c1d22\"" Oct 2 23:58:36.380510 systemd[1]: Started cri-containerd-1d18be303edc2f00aaec2c14998f5db5a5cf1645e674ffbcd41a9666076c1d22.scope. 
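The containerd entries above pair each `PullImage` request with the image reference it resolves to, e.g. `registry.k8s.io/kube-proxy:v1.27.6` resolving to the `sha256:ec57bbfaaae73…` reference before the kube-proxy container is created from it. A small Python sketch (same caveats as the earlier ones: journal text on stdin, regex inferred from these lines only) that extracts those tag-to-reference pairs:

```python
import re
import sys

# Matches containerd result lines like:
#   msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.6\" returns image reference \"sha256:ec57...\""
PULL_RESULT = re.compile(
    r'PullImage \\"(?P<image>[^"\\]+)\\" returns image reference \\"(?P<ref>[^"\\]+)\\"'
)

for line in sys.stdin:
    for match in PULL_RESULT.finditer(line):
        print(f'{match.group("image")} -> {match.group("ref")}')
```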
Oct 2 23:58:36.387000 audit[1686]: AVC avc: denied { perfmon } for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.393861 kernel: kauditd_printk_skb: 528 callbacks suppressed Oct 2 23:58:36.393916 kernel: audit: type=1400 audit(1696291116.387:556): avc: denied { perfmon } for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.387000 audit[1686]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001496b0 a2=3c a3=8 items=0 ppid=1613 pid=1686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:36.531817 kernel: audit: type=1300 audit(1696291116.387:556): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001496b0 a2=3c a3=8 items=0 ppid=1613 pid=1686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:36.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164313862653330336564633266303061616563326331343939386635 Oct 2 23:58:36.532431 kernel: audit: type=1327 audit(1696291116.387:556): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164313862653330336564633266303061616563326331343939386635 Oct 2 23:58:36.387000 audit[1686]: AVC avc: denied { bpf } for pid=1686 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.673566 kernel: audit: type=1400 audit(1696291116.387:557): avc: denied { bpf } for pid=1686 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.673603 kernel: audit: type=1400 audit(1696291116.387:557): avc: denied { bpf } for pid=1686 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.387000 audit[1686]: AVC avc: denied { bpf } for pid=1686 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.733840 kernel: audit: type=1400 audit(1696291116.387:557): avc: denied { bpf } for pid=1686 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.387000 audit[1686]: AVC avc: denied { bpf } for pid=1686 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.387000 audit[1686]: AVC avc: denied { perfmon } for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.808907 env[1152]: time="2023-10-02T23:58:36.808886389Z" level=info 
msg="StartContainer for \"1d18be303edc2f00aaec2c14998f5db5a5cf1645e674ffbcd41a9666076c1d22\" returns successfully" Oct 2 23:58:36.858098 kernel: audit: type=1400 audit(1696291116.387:557): avc: denied { perfmon } for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.858154 kernel: audit: type=1400 audit(1696291116.387:557): avc: denied { perfmon } for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.387000 audit[1686]: AVC avc: denied { perfmon } for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.387000 audit[1686]: AVC avc: denied { perfmon } for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.983716 kernel: audit: type=1400 audit(1696291116.387:557): avc: denied { perfmon } for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.983749 kernel: audit: type=1400 audit(1696291116.387:557): avc: denied { perfmon } for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.387000 audit[1686]: AVC avc: denied { perfmon } for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.387000 audit[1686]: AVC avc: denied { perfmon } for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.387000 audit[1686]: AVC avc: denied { bpf } for pid=1686 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.387000 audit[1686]: AVC avc: denied { bpf } for pid=1686 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.387000 audit: BPF prog-id=70 op=LOAD Oct 2 23:58:36.387000 audit[1686]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001499d8 a2=78 a3=c0002d5be0 items=0 ppid=1613 pid=1686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:36.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164313862653330336564633266303061616563326331343939386635 Oct 2 23:58:36.447000 audit[1686]: AVC avc: denied { bpf } for pid=1686 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.447000 audit[1686]: AVC avc: denied { bpf } for pid=1686 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.447000 audit[1686]: AVC avc: denied { perfmon } 
for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.447000 audit[1686]: AVC avc: denied { perfmon } for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.447000 audit[1686]: AVC avc: denied { perfmon } for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.447000 audit[1686]: AVC avc: denied { perfmon } for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.447000 audit[1686]: AVC avc: denied { perfmon } for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.447000 audit[1686]: AVC avc: denied { bpf } for pid=1686 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.447000 audit[1686]: AVC avc: denied { bpf } for pid=1686 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.447000 audit: BPF prog-id=71 op=LOAD Oct 2 23:58:36.447000 audit[1686]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000149770 a2=78 a3=c0002d5c28 items=0 ppid=1613 pid=1686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:36.447000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164313862653330336564633266303061616563326331343939386635 Oct 2 23:58:36.531000 audit: BPF prog-id=71 op=UNLOAD Oct 2 23:58:36.531000 audit: BPF prog-id=70 op=UNLOAD Oct 2 23:58:36.531000 audit[1686]: AVC avc: denied { bpf } for pid=1686 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.531000 audit[1686]: AVC avc: denied { bpf } for pid=1686 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.531000 audit[1686]: AVC avc: denied { bpf } for pid=1686 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.531000 audit[1686]: AVC avc: denied { perfmon } for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.531000 audit[1686]: AVC avc: denied { perfmon } for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.531000 audit[1686]: AVC avc: denied { perfmon } for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.531000 audit[1686]: AVC avc: denied { perfmon } 
for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.531000 audit[1686]: AVC avc: denied { perfmon } for pid=1686 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.531000 audit[1686]: AVC avc: denied { bpf } for pid=1686 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.531000 audit[1686]: AVC avc: denied { bpf } for pid=1686 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 23:58:36.531000 audit: BPF prog-id=72 op=LOAD Oct 2 23:58:36.531000 audit[1686]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000149c30 a2=78 a3=c0002d5cb8 items=0 ppid=1613 pid=1686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:36.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164313862653330336564633266303061616563326331343939386635 Oct 2 23:58:36.842000 audit[1746]: NETFILTER_CFG table=mangle:14 family=2 entries=1 op=nft_register_chain pid=1746 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:36.842000 audit[1746]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb8287b80 a2=0 a3=7fffb8287b6c items=0 ppid=1696 pid=1746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:36.842000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 23:58:36.842000 audit[1747]: NETFILTER_CFG table=mangle:15 family=10 entries=1 op=nft_register_chain pid=1747 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:36.842000 audit[1747]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe442f87f0 a2=0 a3=7ffe442f87dc items=0 ppid=1696 pid=1747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:36.842000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 23:58:36.842000 audit[1748]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_chain pid=1748 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:36.842000 audit[1748]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9107e140 a2=0 a3=7ffd9107e12c items=0 ppid=1696 pid=1748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:36.842000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 23:58:36.842000 audit[1749]: 
NETFILTER_CFG table=nat:17 family=10 entries=1 op=nft_register_chain pid=1749 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:36.842000 audit[1749]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb57a2c70 a2=0 a3=7ffeb57a2c5c items=0 ppid=1696 pid=1749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:36.842000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 23:58:36.843000 audit[1750]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_register_chain pid=1750 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:36.843000 audit[1750]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2ce19a20 a2=0 a3=7ffe2ce19a0c items=0 ppid=1696 pid=1750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:36.843000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 23:58:36.843000 audit[1751]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1751 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:36.843000 audit[1751]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff57285180 a2=0 a3=7fff5728516c items=0 ppid=1696 pid=1751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:36.843000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 23:58:36.950000 audit[1752]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_chain pid=1752 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:36.950000 audit[1752]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd9f604630 a2=0 a3=7ffd9f60461c items=0 ppid=1696 pid=1752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:36.950000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 23:58:36.952000 audit[1754]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1754 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:36.952000 audit[1754]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff699593c0 a2=0 a3=7fff699593ac items=0 ppid=1696 pid=1754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:36.952000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 23:58:37.048000 audit[1757]: NETFILTER_CFG 
table=filter:22 family=2 entries=2 op=nft_register_chain pid=1757 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:37.048000 audit[1757]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdb7480570 a2=0 a3=7ffdb748055c items=0 ppid=1696 pid=1757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.048000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 23:58:37.049000 audit[1758]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_chain pid=1758 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:37.049000 audit[1758]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd5965f50 a2=0 a3=7ffcd5965f3c items=0 ppid=1696 pid=1758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.049000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 23:58:37.050000 audit[1760]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=1760 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:37.050000 audit[1760]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd8eeb3cc0 a2=0 a3=7ffd8eeb3cac items=0 ppid=1696 pid=1760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.050000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 23:58:37.050000 audit[1761]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=1761 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:37.050000 audit[1761]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd830c50e0 a2=0 a3=7ffd830c50cc items=0 ppid=1696 pid=1761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.050000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 23:58:37.052000 audit[1763]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=1763 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:37.052000 audit[1763]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc177233d0 a2=0 a3=7ffc177233bc items=0 ppid=1696 pid=1763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.052000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 23:58:37.054000 audit[1766]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=1766 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:37.054000 audit[1766]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc9c3017e0 a2=0 a3=7ffc9c3017cc items=0 ppid=1696 pid=1766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.054000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 23:58:37.054000 audit[1767]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1767 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:37.054000 audit[1767]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffccc0c9b60 a2=0 a3=7ffccc0c9b4c items=0 ppid=1696 pid=1767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.054000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 23:58:37.055000 audit[1769]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1769 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:37.055000 audit[1769]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff0e85bee0 a2=0 a3=7fff0e85becc items=0 ppid=1696 pid=1769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.055000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 23:58:37.056000 audit[1770]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=1770 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:37.056000 audit[1770]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffcfe2f530 a2=0 a3=7fffcfe2f51c items=0 ppid=1696 pid=1770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.056000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 23:58:37.057000 audit[1772]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_rule pid=1772 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:37.057000 audit[1772]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc07fee8e0 a2=0 a3=7ffc07fee8cc items=0 ppid=1696 pid=1772 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.057000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 23:58:37.059000 audit[1775]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=1775 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:37.059000 audit[1775]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff58e28a20 a2=0 a3=7fff58e28a0c items=0 ppid=1696 pid=1775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.059000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 23:58:37.061000 audit[1778]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1778 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:37.061000 audit[1778]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffb559bab0 a2=0 a3=7fffb559ba9c items=0 ppid=1696 pid=1778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.061000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 23:58:37.061000 audit[1779]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1779 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:37.061000 audit[1779]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff6314caf0 a2=0 a3=7fff6314cadc items=0 ppid=1696 pid=1779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.061000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 23:58:37.062000 audit[1781]: NETFILTER_CFG table=nat:35 family=2 entries=2 op=nft_register_chain pid=1781 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:37.062000 audit[1781]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffffb7aa4c0 a2=0 a3=7ffffb7aa4ac items=0 ppid=1696 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.062000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 23:58:37.087000 audit[1786]: NETFILTER_CFG table=nat:36 family=2 entries=2 op=nft_register_chain pid=1786 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:37.087000 audit[1786]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe3d3134a0 a2=0 a3=7ffe3d31348c items=0 ppid=1696 pid=1786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.087000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 23:58:37.090000 audit[1791]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1791 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:37.090000 audit[1791]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa3cfb380 a2=0 a3=7fffa3cfb36c items=0 ppid=1696 pid=1791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.090000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 23:58:37.091000 audit[1793]: NETFILTER_CFG table=nat:38 family=2 entries=2 op=nft_register_chain pid=1793 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 23:58:37.091000 audit[1793]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff8ef13780 a2=0 a3=7fff8ef1376c items=0 ppid=1696 pid=1793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.091000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 23:58:37.098000 audit[1795]: NETFILTER_CFG table=filter:39 family=2 entries=8 op=nft_register_rule pid=1795 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 23:58:37.098000 audit[1795]: SYSCALL arch=c000003e syscall=46 success=yes exit=4956 a0=3 a1=7fffff720700 a2=0 a3=7fffff7206ec items=0 ppid=1696 pid=1795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.098000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 23:58:37.120000 audit[1795]: NETFILTER_CFG table=nat:40 family=2 entries=14 op=nft_register_chain pid=1795 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 23:58:37.120000 audit[1795]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fffff720700 a2=0 a3=7fffff7206ec items=0 ppid=1696 pid=1795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.120000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 23:58:37.121000 audit[1801]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=1801 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.121000 audit[1801]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd7f97fa70 a2=0 a3=7ffd7f97fa5c items=0 ppid=1696 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.121000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 23:58:37.123000 audit[1803]: NETFILTER_CFG table=filter:42 family=10 entries=2 op=nft_register_chain pid=1803 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.123000 audit[1803]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe7c4477e0 a2=0 a3=7ffe7c4477cc items=0 ppid=1696 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.123000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 23:58:37.127000 audit[1806]: NETFILTER_CFG table=filter:43 family=10 entries=2 op=nft_register_chain pid=1806 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.127000 audit[1806]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff969deb90 a2=0 a3=7fff969deb7c items=0 ppid=1696 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.127000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 23:58:37.128000 audit[1807]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=1807 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.128000 audit[1807]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe027aef30 a2=0 a3=7ffe027aef1c items=0 ppid=1696 pid=1807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.128000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 23:58:37.130094 kubelet[1543]: E1002 23:58:37.130045 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:37.130000 audit[1809]: NETFILTER_CFG table=filter:45 
family=10 entries=1 op=nft_register_rule pid=1809 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.130000 audit[1809]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffefe0ff9a0 a2=0 a3=7ffefe0ff98c items=0 ppid=1696 pid=1809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.130000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 23:58:37.131000 audit[1810]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=1810 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.131000 audit[1810]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff7872b400 a2=0 a3=7fff7872b3ec items=0 ppid=1696 pid=1810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.131000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 23:58:37.133000 audit[1812]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_rule pid=1812 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.133000 audit[1812]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff7e142ab0 a2=0 a3=7fff7e142a9c items=0 ppid=1696 pid=1812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.133000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 23:58:37.137000 audit[1815]: NETFILTER_CFG table=filter:48 family=10 entries=2 op=nft_register_chain pid=1815 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.137000 audit[1815]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc30b25a60 a2=0 a3=7ffc30b25a4c items=0 ppid=1696 pid=1815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.137000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 23:58:37.138000 audit[1816]: NETFILTER_CFG table=filter:49 family=10 entries=1 op=nft_register_chain pid=1816 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.138000 audit[1816]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff088d55e0 a2=0 a3=7fff088d55cc items=0 ppid=1696 pid=1816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.138000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 23:58:37.140000 audit[1818]: NETFILTER_CFG table=filter:50 family=10 entries=1 op=nft_register_rule pid=1818 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.140000 audit[1818]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd70abf700 a2=0 a3=7ffd70abf6ec items=0 ppid=1696 pid=1818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.140000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 23:58:37.141000 audit[1819]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=1819 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.141000 audit[1819]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcfb6f15c0 a2=0 a3=7ffcfb6f15ac items=0 ppid=1696 pid=1819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.141000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 23:58:37.143000 audit[1821]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=1821 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.143000 audit[1821]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc7fc7b9f0 a2=0 a3=7ffc7fc7b9dc items=0 ppid=1696 pid=1821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.143000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 23:58:37.146000 audit[1824]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_rule pid=1824 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.146000 audit[1824]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd45933ff0 a2=0 a3=7ffd45933fdc items=0 ppid=1696 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.146000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 23:58:37.149000 audit[1827]: NETFILTER_CFG table=filter:54 family=10 entries=1 op=nft_register_rule pid=1827 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.149000 audit[1827]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=748 a0=3 a1=7ffcec0412f0 a2=0 a3=7ffcec0412dc items=0 ppid=1696 pid=1827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.149000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 23:58:37.150000 audit[1828]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=1828 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.150000 audit[1828]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe460d2cc0 a2=0 a3=7ffe460d2cac items=0 ppid=1696 pid=1828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.150000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 23:58:37.152000 audit[1830]: NETFILTER_CFG table=nat:56 family=10 entries=2 op=nft_register_chain pid=1830 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.152000 audit[1830]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd6763c550 a2=0 a3=7ffd6763c53c items=0 ppid=1696 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.152000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 23:58:37.155000 audit[1833]: NETFILTER_CFG table=nat:57 family=10 entries=2 op=nft_register_chain pid=1833 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.155000 audit[1833]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff02adc2b0 a2=0 a3=7fff02adc29c items=0 ppid=1696 pid=1833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.155000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 23:58:37.156000 audit[1834]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=1834 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.156000 audit[1834]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf6dd8e50 a2=0 a3=7ffdf6dd8e3c items=0 ppid=1696 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.156000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 23:58:37.159000 audit[1836]: NETFILTER_CFG 
table=filter:59 family=10 entries=1 op=nft_register_rule pid=1836 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.159000 audit[1836]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdcd655eb0 a2=0 a3=7ffdcd655e9c items=0 ppid=1696 pid=1836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.159000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 23:58:37.162000 audit[1839]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_rule pid=1839 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.162000 audit[1839]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff69bdab20 a2=0 a3=7fff69bdab0c items=0 ppid=1696 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.162000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 23:58:37.163000 audit[1840]: NETFILTER_CFG table=nat:61 family=10 entries=1 op=nft_register_chain pid=1840 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.163000 audit[1840]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc23a8ba10 a2=0 a3=7ffc23a8b9fc items=0 ppid=1696 pid=1840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.163000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 23:58:37.166000 audit[1842]: NETFILTER_CFG table=nat:62 family=10 entries=2 op=nft_register_chain pid=1842 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 23:58:37.166000 audit[1842]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff87d378a0 a2=0 a3=7fff87d3788c items=0 ppid=1696 pid=1842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.166000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 23:58:37.169000 audit[1844]: NETFILTER_CFG table=filter:63 family=10 entries=3 op=nft_register_rule pid=1844 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 23:58:37.169000 audit[1844]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffc83462140 a2=0 a3=7ffc8346212c items=0 ppid=1696 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.169000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 23:58:37.170000 audit[1844]: NETFILTER_CFG table=nat:64 
family=10 entries=7 op=nft_register_chain pid=1844 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 23:58:37.170000 audit[1844]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffc83462140 a2=0 a3=7ffc8346212c items=0 ppid=1696 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 23:58:37.170000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 23:58:37.288897 kubelet[1543]: I1002 23:58:37.288704 1543 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-j8f5t" podStartSLOduration=4.123337238 podCreationTimestamp="2023-10-02 23:58:32 +0000 UTC" firstStartedPulling="2023-10-02 23:58:35.186967231 +0000 UTC m=+2.329979292" lastFinishedPulling="2023-10-02 23:58:36.352250363 +0000 UTC m=+3.495262423" observedRunningTime="2023-10-02 23:58:37.28830897 +0000 UTC m=+4.431321094" watchObservedRunningTime="2023-10-02 23:58:37.288620369 +0000 UTC m=+4.431632514" Oct 2 23:58:38.130730 kubelet[1543]: E1002 23:58:38.130685 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:39.131332 kubelet[1543]: E1002 23:58:39.131287 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:39.747470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount46103899.mount: Deactivated successfully. Oct 2 23:58:40.132189 kubelet[1543]: E1002 23:58:40.132070 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:41.132542 kubelet[1543]: E1002 23:58:41.132519 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:41.476089 env[1152]: time="2023-10-02T23:58:41.476037396Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 23:58:41.476702 env[1152]: time="2023-10-02T23:58:41.476678910Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 23:58:41.477569 env[1152]: time="2023-10-02T23:58:41.477523579Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 23:58:41.478240 env[1152]: time="2023-10-02T23:58:41.478197941Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 2 23:58:41.479191 env[1152]: time="2023-10-02T23:58:41.479156093Z" level=info msg="CreateContainer within sandbox \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 23:58:41.483645 env[1152]: time="2023-10-02T23:58:41.483597224Z" level=info 
msg="CreateContainer within sandbox \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58\"" Oct 2 23:58:41.483873 env[1152]: time="2023-10-02T23:58:41.483812390Z" level=info msg="StartContainer for \"d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58\"" Oct 2 23:58:41.505381 systemd[1]: Started cri-containerd-d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58.scope. Oct 2 23:58:41.509766 systemd[1]: cri-containerd-d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58.scope: Deactivated successfully. Oct 2 23:58:41.510002 systemd[1]: Stopped cri-containerd-d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58.scope. Oct 2 23:58:42.133147 kubelet[1543]: E1002 23:58:42.133084 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:42.485081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58-rootfs.mount: Deactivated successfully. Oct 2 23:58:42.787529 env[1152]: time="2023-10-02T23:58:42.787258759Z" level=info msg="shim disconnected" id=d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58 Oct 2 23:58:42.787529 env[1152]: time="2023-10-02T23:58:42.787411735Z" level=warning msg="cleaning up after shim disconnected" id=d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58 namespace=k8s.io Oct 2 23:58:42.787529 env[1152]: time="2023-10-02T23:58:42.787447182Z" level=info msg="cleaning up dead shim" Oct 2 23:58:42.815075 env[1152]: time="2023-10-02T23:58:42.814944460Z" level=warning msg="cleanup warnings time=\"2023-10-02T23:58:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1869 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T23:58:42Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 23:58:42.815726 env[1152]: time="2023-10-02T23:58:42.815470558Z" level=error msg="copy shim log" error="read /proc/self/fd/57: file already closed" Oct 2 23:58:42.816013 env[1152]: time="2023-10-02T23:58:42.815856357Z" level=error msg="Failed to pipe stdout of container \"d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58\"" error="reading from a closed fifo" Oct 2 23:58:42.816013 env[1152]: time="2023-10-02T23:58:42.815926519Z" level=error msg="Failed to pipe stderr of container \"d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58\"" error="reading from a closed fifo" Oct 2 23:58:42.817694 env[1152]: time="2023-10-02T23:58:42.817533188Z" level=error msg="StartContainer for \"d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 23:58:42.818113 kubelet[1543]: E1002 23:58:42.818029 1543 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container 
process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58" Oct 2 23:58:42.818358 kubelet[1543]: E1002 23:58:42.818324 1543 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 23:58:42.818358 kubelet[1543]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 23:58:42.818358 kubelet[1543]: rm /hostbin/cilium-mount Oct 2 23:58:42.818692 kubelet[1543]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2ggbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 23:58:42.818692 kubelet[1543]: E1002 23:58:42.818476 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 2 23:58:43.134247 kubelet[1543]: E1002 23:58:43.134038 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:43.298340 env[1152]: time="2023-10-02T23:58:43.298250694Z" level=info msg="CreateContainer within sandbox \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 23:58:43.314929 env[1152]: time="2023-10-02T23:58:43.314891032Z" level=info msg="CreateContainer within sandbox \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812\"" Oct 2 23:58:43.315228 env[1152]: time="2023-10-02T23:58:43.315190647Z" level=info msg="StartContainer for \"04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812\"" Oct 2 23:58:43.335346 systemd[1]: Started cri-containerd-04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812.scope. Oct 2 23:58:43.340875 systemd[1]: cri-containerd-04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812.scope: Deactivated successfully. Oct 2 23:58:43.341037 systemd[1]: Stopped cri-containerd-04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812.scope. Oct 2 23:58:43.345313 env[1152]: time="2023-10-02T23:58:43.345277784Z" level=info msg="shim disconnected" id=04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812 Oct 2 23:58:43.345401 env[1152]: time="2023-10-02T23:58:43.345316910Z" level=warning msg="cleaning up after shim disconnected" id=04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812 namespace=k8s.io Oct 2 23:58:43.345401 env[1152]: time="2023-10-02T23:58:43.345324717Z" level=info msg="cleaning up dead shim" Oct 2 23:58:43.361424 env[1152]: time="2023-10-02T23:58:43.361393774Z" level=warning msg="cleanup warnings time=\"2023-10-02T23:58:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1905 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T23:58:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 23:58:43.361663 env[1152]: time="2023-10-02T23:58:43.361618331Z" level=error msg="copy shim log" error="read /proc/self/fd/53: file already closed" Oct 2 23:58:43.361876 env[1152]: time="2023-10-02T23:58:43.361807728Z" level=error msg="Failed to pipe stdout of container \"04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812\"" error="reading from a closed fifo" Oct 2 23:58:43.361876 env[1152]: time="2023-10-02T23:58:43.361819015Z" level=error msg="Failed to pipe stderr of container \"04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812\"" error="reading from a closed fifo" Oct 2 23:58:43.362621 env[1152]: time="2023-10-02T23:58:43.362560874Z" level=error msg="StartContainer for \"04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 23:58:43.362815 kubelet[1543]: E1002 23:58:43.362762 1543 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812" Oct 2 23:58:43.362903 kubelet[1543]: E1002 23:58:43.362879 1543 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount 
/hostbin/cilium-mount; Oct 2 23:58:43.362903 kubelet[1543]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 23:58:43.362903 kubelet[1543]: rm /hostbin/cilium-mount Oct 2 23:58:43.362903 kubelet[1543]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2ggbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 23:58:43.363107 kubelet[1543]: E1002 23:58:43.362931 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 2 23:58:43.485842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812-rootfs.mount: Deactivated successfully. 
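The audit PROCTITLE values in the NETFILTER_CFG records earlier in this log are hex-encoded, NUL-separated argv vectors of the iptables/ip6tables commands kube-proxy issues (via /usr/sbin/xtables-nft-multi) while it creates its KUBE-* chains and rules; the kernel records only a bounded prefix of the command line, which is why many of the longer --comment strings decode cut off mid-word. A minimal decoding sketch in Python, using one of the shorter ip6tables values taken verbatim from the records above:

    # Decode an audit PROCTITLE value back into the command line it records.
    # The kernel stores the process argv as hex, with NUL bytes between arguments.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        args = raw.split(b"\x00")
        return " ".join(a.decode() for a in args if a)

    # Sample copied from an ip6tables audit record earlier in this log.
    sample = ("6970367461626C6573002D770035002D5700313030303030"
              "002D4E004B5542452D464F5257415244002D740066696C746572")
    print(decode_proctitle(sample))
    # -> ip6tables -w 5 -W 100000 -N KUBE-FORWARD -t filter

The same decoding applies to every proctitle= field in the audit records; only the sample value above is singled out for illustration.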
Oct 2 23:58:44.134720 kubelet[1543]: E1002 23:58:44.134649 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:44.298637 kubelet[1543]: I1002 23:58:44.298574 1543 scope.go:115] "RemoveContainer" containerID="d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58" Oct 2 23:58:44.299455 kubelet[1543]: I1002 23:58:44.299403 1543 scope.go:115] "RemoveContainer" containerID="d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58" Oct 2 23:58:44.301705 env[1152]: time="2023-10-02T23:58:44.301626417Z" level=info msg="RemoveContainer for \"d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58\"" Oct 2 23:58:44.302558 env[1152]: time="2023-10-02T23:58:44.302268510Z" level=info msg="RemoveContainer for \"d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58\"" Oct 2 23:58:44.302710 env[1152]: time="2023-10-02T23:58:44.302513467Z" level=error msg="RemoveContainer for \"d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58\" failed" error="failed to set removing state for container \"d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58\": container is already in removing state" Oct 2 23:58:44.302997 kubelet[1543]: E1002 23:58:44.302948 1543 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58\": container is already in removing state" containerID="d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58" Oct 2 23:58:44.303226 kubelet[1543]: E1002 23:58:44.303079 1543 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58": container is already in removing state; Skipping pod "cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9)" Oct 2 23:58:44.303883 kubelet[1543]: E1002 23:58:44.303838 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9)\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 2 23:58:44.305408 env[1152]: time="2023-10-02T23:58:44.305287007Z" level=info msg="RemoveContainer for \"d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58\" returns successfully" Oct 2 23:58:45.135005 kubelet[1543]: E1002 23:58:45.134885 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:45.305123 kubelet[1543]: E1002 23:58:45.305037 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9)\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 2 23:58:45.894800 kubelet[1543]: W1002 23:58:45.894694 1543 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47bb3314_73da_477f_92b3_f06384a9e1c9.slice/cri-containerd-d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58.scope WatchSource:0}: container 
"d9bfbae6c309b94198942a3a04bb8c88a026f2e1fec84043efa507c4d2b2ae58" in namespace "k8s.io": not found Oct 2 23:58:46.135295 kubelet[1543]: E1002 23:58:46.135235 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:47.136303 kubelet[1543]: E1002 23:58:47.136220 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:48.136918 kubelet[1543]: E1002 23:58:48.136813 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:49.005185 kubelet[1543]: W1002 23:58:49.005061 1543 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47bb3314_73da_477f_92b3_f06384a9e1c9.slice/cri-containerd-04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812.scope WatchSource:0}: task 04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812 not found: not found Oct 2 23:58:49.137776 kubelet[1543]: E1002 23:58:49.137653 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:50.138919 kubelet[1543]: E1002 23:58:50.138815 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:51.139789 kubelet[1543]: E1002 23:58:51.139685 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:50.904585 systemd-resolved[1096]: Clock change detected. Flushing caches. Oct 2 23:58:50.943605 systemd-journald[934]: Time jumped backwards, rotating. Oct 2 23:58:50.904844 systemd-timesyncd[1097]: Contacted time server [2620:149:a0c:4000::1f2]:123 (2.flatcar.pool.ntp.org). Oct 2 23:58:50.904966 systemd-timesyncd[1097]: Initial clock synchronization to Mon 2023-10-02 23:58:50.904443 UTC. 
Oct 2 23:58:51.679406 kubelet[1543]: E1002 23:58:51.679298 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:52.666141 kubelet[1543]: E1002 23:58:52.666032 1543 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:52.680616 kubelet[1543]: E1002 23:58:52.680520 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:53.681408 kubelet[1543]: E1002 23:58:53.681287 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:54.681940 kubelet[1543]: E1002 23:58:54.681837 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:55.682311 kubelet[1543]: E1002 23:58:55.682190 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:56.682925 kubelet[1543]: E1002 23:58:56.682824 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:56.808721 env[1152]: time="2023-10-02T23:58:56.808632273Z" level=info msg="CreateContainer within sandbox \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 23:58:56.812760 env[1152]: time="2023-10-02T23:58:56.812711475Z" level=info msg="CreateContainer within sandbox \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671\"" Oct 2 23:58:56.812899 env[1152]: time="2023-10-02T23:58:56.812862645Z" level=info msg="StartContainer for \"21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671\"" Oct 2 23:58:56.841314 systemd[1]: Started cri-containerd-21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671.scope. Oct 2 23:58:56.846645 systemd[1]: cri-containerd-21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671.scope: Deactivated successfully. Oct 2 23:58:56.846810 systemd[1]: Stopped cri-containerd-21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671.scope. Oct 2 23:58:56.848719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671-rootfs.mount: Deactivated successfully. 
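Every start attempt for the mount-cgroup init container collapses at the same point: during container init, runc tries to write the process's SELinux key-creation label into /proc/self/attr/keycreate, the kernel rejects the write with "invalid argument", no task is ever created, and containerd tears the scope straight back down, as happens again for this third attempt below. A minimal sketch of that write outside any container runtime, assuming the label runc would derive from the pod's SELinuxOptions (Type: spc_t, Level: s0) looks like system_u:system_r:spc_t:s0 (an assumption, not something printed in this log):

    # Attempt the same write that fails inside runc's container init.
    import errno

    label = "system_u:system_r:spc_t:s0"   # assumed label composed from the pod's SELinuxOptions
    try:
        with open("/proc/self/attr/keycreate", "w") as f:
            f.write(label)
        print("keycreate label accepted")
    except OSError as e:
        # EINVAL here corresponds to the "invalid argument" runc reports above.
        name = errno.errorcode.get(e.errno, str(e.errno))
        print(f"write to /proc/self/attr/keycreate failed: {name}: {e.strerror}")

If the write fails with EINVAL on this host as well, it reproduces the condition containerd reports for each attempt in this log.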
Oct 2 23:58:56.850238 env[1152]: time="2023-10-02T23:58:56.850206594Z" level=info msg="shim disconnected" id=21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671 Oct 2 23:58:56.850289 env[1152]: time="2023-10-02T23:58:56.850241100Z" level=warning msg="cleaning up after shim disconnected" id=21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671 namespace=k8s.io Oct 2 23:58:56.850289 env[1152]: time="2023-10-02T23:58:56.850247875Z" level=info msg="cleaning up dead shim" Oct 2 23:58:56.865880 env[1152]: time="2023-10-02T23:58:56.865849268Z" level=warning msg="cleanup warnings time=\"2023-10-02T23:58:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1942 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T23:58:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 23:58:56.866083 env[1152]: time="2023-10-02T23:58:56.866047292Z" level=error msg="copy shim log" error="read /proc/self/fd/53: file already closed" Oct 2 23:58:56.866204 env[1152]: time="2023-10-02T23:58:56.866173887Z" level=error msg="Failed to pipe stdout of container \"21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671\"" error="reading from a closed fifo" Oct 2 23:58:56.866246 env[1152]: time="2023-10-02T23:58:56.866222828Z" level=error msg="Failed to pipe stderr of container \"21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671\"" error="reading from a closed fifo" Oct 2 23:58:56.867004 env[1152]: time="2023-10-02T23:58:56.866934441Z" level=error msg="StartContainer for \"21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 23:58:56.867133 kubelet[1543]: E1002 23:58:56.867086 1543 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671" Oct 2 23:58:56.867189 kubelet[1543]: E1002 23:58:56.867155 1543 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 23:58:56.867189 kubelet[1543]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 23:58:56.867189 kubelet[1543]: rm /hostbin/cilium-mount Oct 2 23:58:56.867189 kubelet[1543]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2ggbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 23:58:56.867189 kubelet[1543]: E1002 23:58:56.867183 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 2 23:58:56.870843 kubelet[1543]: I1002 23:58:56.870830 1543 scope.go:115] "RemoveContainer" containerID="04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812" Oct 2 23:58:56.871106 kubelet[1543]: I1002 23:58:56.871071 1543 scope.go:115] "RemoveContainer" containerID="04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812" Oct 2 23:58:56.871432 env[1152]: time="2023-10-02T23:58:56.871413420Z" level=info msg="RemoveContainer for \"04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812\"" Oct 2 23:58:56.871665 env[1152]: time="2023-10-02T23:58:56.871628415Z" level=info msg="RemoveContainer for \"04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812\"" Oct 2 23:58:56.871729 env[1152]: time="2023-10-02T23:58:56.871707535Z" level=error msg="RemoveContainer for \"04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812\" failed" error="failed to set removing state for container \"04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812\": container is already in removing state" Oct 2 23:58:56.871809 kubelet[1543]: E1002 23:58:56.871800 1543 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812\": container is already in 
removing state" containerID="04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812" Oct 2 23:58:56.871849 kubelet[1543]: E1002 23:58:56.871822 1543 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812": container is already in removing state; Skipping pod "cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9)" Oct 2 23:58:56.872042 kubelet[1543]: E1002 23:58:56.872031 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9)\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 2 23:58:56.872600 env[1152]: time="2023-10-02T23:58:56.872559867Z" level=info msg="RemoveContainer for \"04d6dd9ff0e1c6e5a93b74ce7bc681c840d8be9a9c190f9ceefd2cd96b8ec812\" returns successfully" Oct 2 23:58:57.683401 kubelet[1543]: E1002 23:58:57.683281 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:58.683874 kubelet[1543]: E1002 23:58:58.683809 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:59.685130 kubelet[1543]: E1002 23:58:59.685020 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:58:59.961693 kubelet[1543]: W1002 23:58:59.959941 1543 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47bb3314_73da_477f_92b3_f06384a9e1c9.slice/cri-containerd-21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671.scope WatchSource:0}: task 21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671 not found: not found Oct 2 23:59:00.686296 kubelet[1543]: E1002 23:59:00.686187 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:01.686545 kubelet[1543]: E1002 23:59:01.686450 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:02.687165 kubelet[1543]: E1002 23:59:02.687107 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:03.688368 kubelet[1543]: E1002 23:59:03.688243 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:04.531015 update_engine[1144]: I1002 23:59:04.530884 1144 update_attempter.cc:505] Updating boot flags... 
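The restart delay in the pod_workers messages doubles from the 10s back-off seen after the earlier attempts to the 20s seen here, which is kubelet's exponential per-container restart back-off at work. A small sketch of that schedule, assuming the commonly cited defaults of a 10s initial delay doubling up to a 300s ceiling (both values are assumptions, not read from this node's configuration):

    # Sketch of a kubelet-style exponential restart back-off for a crash-looping container.
    def backoff_schedule(initial: float = 10.0, cap: float = 300.0, steps: int = 8):
        # Yield successive delays: start at `initial`, double each time, never exceed `cap`.
        delay = initial
        for _ in range(steps):
            yield min(delay, cap)
            delay *= 2

    print(list(backoff_schedule()))
    # -> [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]

Under that schedule the 10s and 20s delays recorded here would continue to grow on further failures until the ceiling is reached.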
Oct 2 23:59:04.689423 kubelet[1543]: E1002 23:59:04.689297 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:05.689582 kubelet[1543]: E1002 23:59:05.689507 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:06.690560 kubelet[1543]: E1002 23:59:06.690443 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:07.691406 kubelet[1543]: E1002 23:59:07.691291 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:08.692414 kubelet[1543]: E1002 23:59:08.692289 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:08.804077 kubelet[1543]: E1002 23:59:08.804005 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9)\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 2 23:59:09.692814 kubelet[1543]: E1002 23:59:09.692689 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:10.693193 kubelet[1543]: E1002 23:59:10.693110 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:11.693448 kubelet[1543]: E1002 23:59:11.693330 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:12.665759 kubelet[1543]: E1002 23:59:12.665685 1543 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:12.693843 kubelet[1543]: E1002 23:59:12.693766 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:13.694033 kubelet[1543]: E1002 23:59:13.693929 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:14.695092 kubelet[1543]: E1002 23:59:14.694952 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:15.695467 kubelet[1543]: E1002 23:59:15.695348 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:16.696170 kubelet[1543]: E1002 23:59:16.696110 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:17.696618 kubelet[1543]: E1002 23:59:17.696542 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:18.697839 kubelet[1543]: E1002 23:59:18.697772 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:19.698648 kubelet[1543]: E1002 23:59:19.698584 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:20.698889 kubelet[1543]: E1002 23:59:20.698778 1543 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:21.699751 kubelet[1543]: E1002 23:59:21.699630 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:21.809391 env[1152]: time="2023-10-02T23:59:21.809339597Z" level=info msg="CreateContainer within sandbox \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 23:59:21.813586 env[1152]: time="2023-10-02T23:59:21.813571399Z" level=info msg="CreateContainer within sandbox \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1\"" Oct 2 23:59:21.813745 env[1152]: time="2023-10-02T23:59:21.813731292Z" level=info msg="StartContainer for \"1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1\"" Oct 2 23:59:21.842296 systemd[1]: Started cri-containerd-1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1.scope. Oct 2 23:59:21.847065 systemd[1]: cri-containerd-1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1.scope: Deactivated successfully. Oct 2 23:59:21.847199 systemd[1]: Stopped cri-containerd-1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1.scope. Oct 2 23:59:21.850003 env[1152]: time="2023-10-02T23:59:21.849974004Z" level=info msg="shim disconnected" id=1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1 Oct 2 23:59:21.850059 env[1152]: time="2023-10-02T23:59:21.850003866Z" level=warning msg="cleaning up after shim disconnected" id=1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1 namespace=k8s.io Oct 2 23:59:21.850059 env[1152]: time="2023-10-02T23:59:21.850009342Z" level=info msg="cleaning up dead shim" Oct 2 23:59:21.865874 env[1152]: time="2023-10-02T23:59:21.865820804Z" level=warning msg="cleanup warnings time=\"2023-10-02T23:59:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1998 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T23:59:21Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 23:59:21.866044 env[1152]: time="2023-10-02T23:59:21.865964869Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 23:59:21.866156 env[1152]: time="2023-10-02T23:59:21.866103424Z" level=error msg="Failed to pipe stderr of container \"1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1\"" error="reading from a closed fifo" Oct 2 23:59:21.866156 env[1152]: time="2023-10-02T23:59:21.866132579Z" level=error msg="Failed to pipe stdout of container \"1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1\"" error="reading from a closed fifo" Oct 2 23:59:21.880238 env[1152]: time="2023-10-02T23:59:21.880112946Z" level=error msg="StartContainer for \"1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 23:59:21.880683 kubelet[1543]: E1002 23:59:21.880599 1543 remote_runtime.go:326] 
"StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1" Oct 2 23:59:21.880954 kubelet[1543]: E1002 23:59:21.880826 1543 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 23:59:21.880954 kubelet[1543]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 23:59:21.880954 kubelet[1543]: rm /hostbin/cilium-mount Oct 2 23:59:21.880954 kubelet[1543]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2ggbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 23:59:21.880954 kubelet[1543]: E1002 23:59:21.880928 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 2 23:59:21.935307 kubelet[1543]: I1002 23:59:21.935246 1543 scope.go:115] "RemoveContainer" containerID="21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671" Oct 2 23:59:21.935878 kubelet[1543]: I1002 23:59:21.935836 1543 scope.go:115] "RemoveContainer" containerID="21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671" Oct 2 23:59:21.938081 env[1152]: 
time="2023-10-02T23:59:21.938008348Z" level=info msg="RemoveContainer for \"21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671\"" Oct 2 23:59:21.939138 env[1152]: time="2023-10-02T23:59:21.939064140Z" level=info msg="RemoveContainer for \"21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671\"" Oct 2 23:59:21.939387 env[1152]: time="2023-10-02T23:59:21.939311441Z" level=error msg="RemoveContainer for \"21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671\" failed" error="failed to set removing state for container \"21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671\": container is already in removing state" Oct 2 23:59:21.939655 kubelet[1543]: E1002 23:59:21.939618 1543 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671\": container is already in removing state" containerID="21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671" Oct 2 23:59:21.939813 kubelet[1543]: E1002 23:59:21.939699 1543 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671": container is already in removing state; Skipping pod "cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9)" Oct 2 23:59:21.940450 kubelet[1543]: E1002 23:59:21.940412 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9)\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 2 23:59:21.941949 env[1152]: time="2023-10-02T23:59:21.941867905Z" level=info msg="RemoveContainer for \"21184fcb66e938686fc0854d3b88238e029ae3678ebac0663fcea40212ee4671\" returns successfully" Oct 2 23:59:22.700237 kubelet[1543]: E1002 23:59:22.700108 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:22.815798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1-rootfs.mount: Deactivated successfully. 
Oct 2 23:59:23.701217 kubelet[1543]: E1002 23:59:23.701148 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:24.701893 kubelet[1543]: E1002 23:59:24.701798 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:24.958835 kubelet[1543]: W1002 23:59:24.958657 1543 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47bb3314_73da_477f_92b3_f06384a9e1c9.slice/cri-containerd-1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1.scope WatchSource:0}: task 1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1 not found: not found Oct 2 23:59:25.702241 kubelet[1543]: E1002 23:59:25.702111 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:26.702954 kubelet[1543]: E1002 23:59:26.702835 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:27.703736 kubelet[1543]: E1002 23:59:27.703623 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:28.704311 kubelet[1543]: E1002 23:59:28.704258 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:29.704941 kubelet[1543]: E1002 23:59:29.704823 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:30.705650 kubelet[1543]: E1002 23:59:30.705485 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:31.706020 kubelet[1543]: E1002 23:59:31.705931 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:32.665841 kubelet[1543]: E1002 23:59:32.665771 1543 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:32.707052 kubelet[1543]: E1002 23:59:32.706946 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:33.707944 kubelet[1543]: E1002 23:59:33.707840 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:34.708508 kubelet[1543]: E1002 23:59:34.708403 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:35.708751 kubelet[1543]: E1002 23:59:35.708634 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:36.708919 kubelet[1543]: E1002 23:59:36.708793 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:36.803142 kubelet[1543]: E1002 23:59:36.803048 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9)\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 2 23:59:37.709282 kubelet[1543]: 
E1002 23:59:37.709163 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:38.710398 kubelet[1543]: E1002 23:59:38.710287 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:39.711105 kubelet[1543]: E1002 23:59:39.710963 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:40.711660 kubelet[1543]: E1002 23:59:40.711540 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:41.711848 kubelet[1543]: E1002 23:59:41.711730 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:42.712400 kubelet[1543]: E1002 23:59:42.712235 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:43.713390 kubelet[1543]: E1002 23:59:43.713271 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:44.714542 kubelet[1543]: E1002 23:59:44.714428 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:45.715521 kubelet[1543]: E1002 23:59:45.715412 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:46.716144 kubelet[1543]: E1002 23:59:46.715963 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:47.717135 kubelet[1543]: E1002 23:59:47.717064 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:48.717354 kubelet[1543]: E1002 23:59:48.717285 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:49.717906 kubelet[1543]: E1002 23:59:49.717798 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:49.804147 kubelet[1543]: E1002 23:59:49.804084 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9)\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 2 23:59:50.718776 kubelet[1543]: E1002 23:59:50.718659 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:51.719120 kubelet[1543]: E1002 23:59:51.719044 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:52.665355 kubelet[1543]: E1002 23:59:52.665246 1543 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:52.720309 kubelet[1543]: E1002 23:59:52.720213 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:53.720907 kubelet[1543]: E1002 23:59:53.720789 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:54.721142 kubelet[1543]: E1002 23:59:54.721029 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:55.721959 kubelet[1543]: E1002 23:59:55.721835 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:56.722738 kubelet[1543]: E1002 23:59:56.722593 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:57.722941 kubelet[1543]: E1002 23:59:57.722839 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:58.723554 kubelet[1543]: E1002 23:59:58.723455 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 23:59:59.724378 kubelet[1543]: E1002 23:59:59.724255 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:00.725406 kubelet[1543]: E1003 00:00:00.725287 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:01.726208 kubelet[1543]: E1003 00:00:01.726098 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:02.727163 kubelet[1543]: E1003 00:00:02.727065 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:03.728315 kubelet[1543]: E1003 00:00:03.728245 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:03.808218 env[1152]: time="2023-10-03T00:00:03.808079404Z" level=info msg="CreateContainer within sandbox \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 3 00:00:03.821220 env[1152]: time="2023-10-03T00:00:03.821139645Z" level=info msg="CreateContainer within sandbox \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c\"" Oct 3 00:00:03.821497 env[1152]: time="2023-10-03T00:00:03.821437221Z" level=info msg="StartContainer for \"b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c\"" Oct 3 00:00:03.823067 systemd[1]: Started logrotate.service. Oct 3 00:00:03.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=logrotate comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 3 00:00:03.825991 systemd[1]: logrotate.service: Deactivated successfully. Oct 3 00:00:03.850821 kernel: kauditd_printk_skb: 186 callbacks suppressed Oct 3 00:00:03.850901 kernel: audit: type=1130 audit(1696291203.822:613): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=logrotate comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 3 00:00:03.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=logrotate comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 3 00:00:03.929476 systemd[1]: Started cri-containerd-b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c.scope. Oct 3 00:00:03.978946 kernel: audit: type=1131 audit(1696291203.825:614): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=logrotate comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 3 00:00:03.985131 systemd[1]: cri-containerd-b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c.scope: Deactivated successfully. Oct 3 00:00:03.987210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c-rootfs.mount: Deactivated successfully. Oct 3 00:00:03.988403 env[1152]: time="2023-10-03T00:00:03.988350733Z" level=info msg="shim disconnected" id=b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c Oct 3 00:00:03.988403 env[1152]: time="2023-10-03T00:00:03.988381774Z" level=warning msg="cleaning up after shim disconnected" id=b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c namespace=k8s.io Oct 3 00:00:03.988403 env[1152]: time="2023-10-03T00:00:03.988387556Z" level=info msg="cleaning up dead shim" Oct 3 00:00:04.004909 env[1152]: time="2023-10-03T00:00:04.004859588Z" level=warning msg="cleanup warnings time=\"2023-10-03T00:00:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2039 runtime=io.containerd.runc.v2\ntime=\"2023-10-03T00:00:04Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 3 00:00:04.005060 env[1152]: time="2023-10-03T00:00:04.004993918Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 3 00:00:04.005316 env[1152]: time="2023-10-03T00:00:04.005133204Z" level=error msg="Failed to pipe stderr of container \"b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c\"" error="reading from a closed fifo" Oct 3 00:00:04.005493 env[1152]: time="2023-10-03T00:00:04.005367453Z" level=error msg="Failed to pipe stdout of container \"b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c\"" error="reading from a closed fifo" Oct 3 00:00:04.005836 env[1152]: time="2023-10-03T00:00:04.005793616Z" level=error msg="StartContainer for \"b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 3 00:00:04.005965 kubelet[1543]: E1003 00:00:04.005954 1543 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c" Oct 3 00:00:04.006059 kubelet[1543]: E1003 00:00:04.006022 1543 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount 
/hostbin/cilium-mount; Oct 3 00:00:04.006059 kubelet[1543]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 3 00:00:04.006059 kubelet[1543]: rm /hostbin/cilium-mount Oct 3 00:00:04.006059 kubelet[1543]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2ggbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 3 00:00:04.006059 kubelet[1543]: E1003 00:00:04.006047 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 3 00:00:04.043232 kubelet[1543]: I1003 00:00:04.043183 1543 scope.go:115] "RemoveContainer" containerID="1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1" Oct 3 00:00:04.043427 kubelet[1543]: I1003 00:00:04.043388 1543 scope.go:115] "RemoveContainer" containerID="1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1" Oct 3 00:00:04.044071 env[1152]: time="2023-10-03T00:00:04.044013918Z" level=info msg="RemoveContainer for \"1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1\"" Oct 3 00:00:04.044241 env[1152]: time="2023-10-03T00:00:04.044193241Z" level=info msg="RemoveContainer for \"1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1\"" Oct 3 00:00:04.044293 env[1152]: time="2023-10-03T00:00:04.044258384Z" level=error msg="RemoveContainer for \"1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1\" failed" error="failed to set removing state for container \"1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1\": container is already in removing state" Oct 3 00:00:04.044395 kubelet[1543]: E1003 
00:00:04.044357 1543 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1\": container is already in removing state" containerID="1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1" Oct 3 00:00:04.044395 kubelet[1543]: E1003 00:00:04.044378 1543 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1": container is already in removing state; Skipping pod "cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9)" Oct 3 00:00:04.044614 kubelet[1543]: E1003 00:00:04.044576 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9)\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 3 00:00:04.045229 env[1152]: time="2023-10-03T00:00:04.045184188Z" level=info msg="RemoveContainer for \"1e9978c4ec8ffd2095b6da95737f705f1d7712e297803413776353a797e396f1\" returns successfully" Oct 3 00:00:04.729387 kubelet[1543]: E1003 00:00:04.729283 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:05.730019 kubelet[1543]: E1003 00:00:05.729907 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:06.730546 kubelet[1543]: E1003 00:00:06.730445 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:07.095151 kubelet[1543]: W1003 00:00:07.095023 1543 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47bb3314_73da_477f_92b3_f06384a9e1c9.slice/cri-containerd-b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c.scope WatchSource:0}: task b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c not found: not found Oct 3 00:00:07.731273 kubelet[1543]: E1003 00:00:07.731167 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:08.731826 kubelet[1543]: E1003 00:00:08.731708 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:09.732949 kubelet[1543]: E1003 00:00:09.732826 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:10.734034 kubelet[1543]: E1003 00:00:10.733925 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:11.734946 kubelet[1543]: E1003 00:00:11.734827 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:12.666205 kubelet[1543]: E1003 00:00:12.666097 1543 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:12.736087 kubelet[1543]: E1003 00:00:12.735961 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 3 00:00:13.736595 kubelet[1543]: E1003 00:00:13.736487 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:14.736703 kubelet[1543]: E1003 00:00:14.736601 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:15.737408 kubelet[1543]: E1003 00:00:15.737298 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:16.737965 kubelet[1543]: E1003 00:00:16.737857 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:16.804057 kubelet[1543]: E1003 00:00:16.803962 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9)\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 3 00:00:17.738288 kubelet[1543]: E1003 00:00:17.738175 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:18.739152 kubelet[1543]: E1003 00:00:18.739031 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:19.739302 kubelet[1543]: E1003 00:00:19.739189 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:20.739749 kubelet[1543]: E1003 00:00:20.739625 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:21.740450 kubelet[1543]: E1003 00:00:21.740337 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:22.741607 kubelet[1543]: E1003 00:00:22.741494 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:23.742090 kubelet[1543]: E1003 00:00:23.741955 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:24.742447 kubelet[1543]: E1003 00:00:24.742339 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:25.742965 kubelet[1543]: E1003 00:00:25.742851 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:26.743954 kubelet[1543]: E1003 00:00:26.743845 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:27.745093 kubelet[1543]: E1003 00:00:27.745015 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:28.745267 kubelet[1543]: E1003 00:00:28.745148 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:29.746387 kubelet[1543]: E1003 00:00:29.746320 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:30.746605 kubelet[1543]: E1003 00:00:30.746526 1543 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:30.803812 kubelet[1543]: E1003 00:00:30.803750 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9)\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 3 00:00:31.747381 kubelet[1543]: E1003 00:00:31.747261 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:32.665810 kubelet[1543]: E1003 00:00:32.665732 1543 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:32.714739 kubelet[1543]: E1003 00:00:32.714625 1543 kubelet_node_status.go:452] "Node not becoming ready in time after startup" Oct 3 00:00:32.747551 kubelet[1543]: E1003 00:00:32.747409 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:32.785268 kubelet[1543]: E1003 00:00:32.785200 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:00:33.748305 kubelet[1543]: E1003 00:00:33.748190 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:34.748687 kubelet[1543]: E1003 00:00:34.748574 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:35.749538 kubelet[1543]: E1003 00:00:35.749393 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:36.749842 kubelet[1543]: E1003 00:00:36.749732 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:37.750837 kubelet[1543]: E1003 00:00:37.750726 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:37.787508 kubelet[1543]: E1003 00:00:37.787418 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:00:38.751136 kubelet[1543]: E1003 00:00:38.751028 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:39.751696 kubelet[1543]: E1003 00:00:39.751591 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:40.752413 kubelet[1543]: E1003 00:00:40.752305 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:41.752778 kubelet[1543]: E1003 00:00:41.752664 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:41.804198 kubelet[1543]: E1003 00:00:41.804082 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup 
pod=cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9)\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 3 00:00:42.753667 kubelet[1543]: E1003 00:00:42.753545 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:42.788597 kubelet[1543]: E1003 00:00:42.788492 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:00:43.754397 kubelet[1543]: E1003 00:00:43.754265 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:44.754823 kubelet[1543]: E1003 00:00:44.754704 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:45.755373 kubelet[1543]: E1003 00:00:45.755267 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:46.756360 kubelet[1543]: E1003 00:00:46.756253 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:47.757319 kubelet[1543]: E1003 00:00:47.757202 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:47.790268 kubelet[1543]: E1003 00:00:47.790164 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:00:48.757810 kubelet[1543]: E1003 00:00:48.757695 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:49.759076 kubelet[1543]: E1003 00:00:49.758954 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:50.759721 kubelet[1543]: E1003 00:00:50.759615 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:51.760765 kubelet[1543]: E1003 00:00:51.760647 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:52.666420 kubelet[1543]: E1003 00:00:52.666297 1543 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:52.761873 kubelet[1543]: E1003 00:00:52.761755 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:52.791306 kubelet[1543]: E1003 00:00:52.791212 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:00:53.762643 kubelet[1543]: E1003 00:00:53.762539 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:54.763097 kubelet[1543]: E1003 00:00:54.762958 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:55.763742 kubelet[1543]: E1003 00:00:55.763633 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 3 00:00:56.763938 kubelet[1543]: E1003 00:00:56.763798 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:56.804269 kubelet[1543]: E1003 00:00:56.804181 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9)\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 3 00:00:57.764884 kubelet[1543]: E1003 00:00:57.764775 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:57.792459 kubelet[1543]: E1003 00:00:57.792371 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:00:58.766075 kubelet[1543]: E1003 00:00:58.765999 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:00:59.766371 kubelet[1543]: E1003 00:00:59.766261 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:00.767384 kubelet[1543]: E1003 00:01:00.767278 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:01.768186 kubelet[1543]: E1003 00:01:01.768066 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:02.769278 kubelet[1543]: E1003 00:01:02.769175 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:02.793473 kubelet[1543]: E1003 00:01:02.793373 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:01:03.769735 kubelet[1543]: E1003 00:01:03.769626 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:04.770160 kubelet[1543]: E1003 00:01:04.770052 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:05.770901 kubelet[1543]: E1003 00:01:05.770795 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:06.772031 kubelet[1543]: E1003 00:01:06.771915 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:07.773095 kubelet[1543]: E1003 00:01:07.772971 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:07.794655 kubelet[1543]: E1003 00:01:07.794555 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:01:08.773776 kubelet[1543]: E1003 00:01:08.773668 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:09.774265 kubelet[1543]: E1003 
00:01:09.774147 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:10.775401 kubelet[1543]: E1003 00:01:10.775285 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:11.775836 kubelet[1543]: E1003 00:01:11.775720 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:11.804293 kubelet[1543]: E1003 00:01:11.804186 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9)\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 3 00:01:12.666267 kubelet[1543]: E1003 00:01:12.666162 1543 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:12.776885 kubelet[1543]: E1003 00:01:12.776772 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:12.795813 kubelet[1543]: E1003 00:01:12.795702 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:01:13.777889 kubelet[1543]: E1003 00:01:13.777784 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:14.778970 kubelet[1543]: E1003 00:01:14.778864 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:15.780035 kubelet[1543]: E1003 00:01:15.779931 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:16.780842 kubelet[1543]: E1003 00:01:16.780649 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:17.781908 kubelet[1543]: E1003 00:01:17.781791 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:17.797481 kubelet[1543]: E1003 00:01:17.797369 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:01:18.782788 kubelet[1543]: E1003 00:01:18.782677 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:19.783509 kubelet[1543]: E1003 00:01:19.783390 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:20.783788 kubelet[1543]: E1003 00:01:20.783675 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:21.784052 kubelet[1543]: E1003 00:01:21.783898 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:22.784912 kubelet[1543]: E1003 00:01:22.784808 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 
00:01:22.798600 kubelet[1543]: E1003 00:01:22.798504 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:01:22.803288 kubelet[1543]: E1003 00:01:22.803212 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9)\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 3 00:01:23.785797 kubelet[1543]: E1003 00:01:23.785690 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:24.786805 kubelet[1543]: E1003 00:01:24.786698 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:25.787723 kubelet[1543]: E1003 00:01:25.787606 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:26.788632 kubelet[1543]: E1003 00:01:26.788533 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:27.789796 kubelet[1543]: E1003 00:01:27.789689 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:27.800365 kubelet[1543]: E1003 00:01:27.800266 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:01:28.790300 kubelet[1543]: E1003 00:01:28.790202 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:29.790458 kubelet[1543]: E1003 00:01:29.790350 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:30.791712 kubelet[1543]: E1003 00:01:30.791613 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:31.792732 kubelet[1543]: E1003 00:01:31.792634 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:32.666254 kubelet[1543]: E1003 00:01:32.666153 1543 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:32.793615 kubelet[1543]: E1003 00:01:32.793506 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:32.801200 kubelet[1543]: E1003 00:01:32.801131 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:01:33.794520 kubelet[1543]: E1003 00:01:33.794413 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:34.794719 kubelet[1543]: E1003 00:01:34.794610 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:35.795410 kubelet[1543]: E1003 00:01:35.795306 1543 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:36.796452 kubelet[1543]: E1003 00:01:36.796390 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:37.797218 kubelet[1543]: E1003 00:01:37.797140 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:37.802616 kubelet[1543]: E1003 00:01:37.802557 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:01:37.807082 env[1152]: time="2023-10-03T00:01:37.806934109Z" level=info msg="CreateContainer within sandbox \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Oct 3 00:01:37.821261 env[1152]: time="2023-10-03T00:01:37.821178133Z" level=info msg="CreateContainer within sandbox \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"30ff90e102927eb8ead9c21ad9d9d305f736ec03fa3f7dfaf80bd462a855a39c\"" Oct 3 00:01:37.821550 env[1152]: time="2023-10-03T00:01:37.821462486Z" level=info msg="StartContainer for \"30ff90e102927eb8ead9c21ad9d9d305f736ec03fa3f7dfaf80bd462a855a39c\"" Oct 3 00:01:37.844420 systemd[1]: Started cri-containerd-30ff90e102927eb8ead9c21ad9d9d305f736ec03fa3f7dfaf80bd462a855a39c.scope. Oct 3 00:01:37.849461 systemd[1]: cri-containerd-30ff90e102927eb8ead9c21ad9d9d305f736ec03fa3f7dfaf80bd462a855a39c.scope: Deactivated successfully. Oct 3 00:01:37.849652 systemd[1]: Stopped cri-containerd-30ff90e102927eb8ead9c21ad9d9d305f736ec03fa3f7dfaf80bd462a855a39c.scope. 
Oct 3 00:01:37.853242 env[1152]: time="2023-10-03T00:01:37.853206520Z" level=info msg="shim disconnected" id=30ff90e102927eb8ead9c21ad9d9d305f736ec03fa3f7dfaf80bd462a855a39c Oct 3 00:01:37.853333 env[1152]: time="2023-10-03T00:01:37.853245862Z" level=warning msg="cleaning up after shim disconnected" id=30ff90e102927eb8ead9c21ad9d9d305f736ec03fa3f7dfaf80bd462a855a39c namespace=k8s.io Oct 3 00:01:37.853333 env[1152]: time="2023-10-03T00:01:37.853258643Z" level=info msg="cleaning up dead shim" Oct 3 00:01:37.859147 env[1152]: time="2023-10-03T00:01:37.859087572Z" level=warning msg="cleanup warnings time=\"2023-10-03T00:01:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2088 runtime=io.containerd.runc.v2\ntime=\"2023-10-03T00:01:37Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/30ff90e102927eb8ead9c21ad9d9d305f736ec03fa3f7dfaf80bd462a855a39c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 3 00:01:37.859362 env[1152]: time="2023-10-03T00:01:37.859291052Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 3 00:01:37.859535 env[1152]: time="2023-10-03T00:01:37.859464615Z" level=error msg="Failed to pipe stdout of container \"30ff90e102927eb8ead9c21ad9d9d305f736ec03fa3f7dfaf80bd462a855a39c\"" error="reading from a closed fifo" Oct 3 00:01:37.859535 env[1152]: time="2023-10-03T00:01:37.859486160Z" level=error msg="Failed to pipe stderr of container \"30ff90e102927eb8ead9c21ad9d9d305f736ec03fa3f7dfaf80bd462a855a39c\"" error="reading from a closed fifo" Oct 3 00:01:37.860232 env[1152]: time="2023-10-03T00:01:37.860173767Z" level=error msg="StartContainer for \"30ff90e102927eb8ead9c21ad9d9d305f736ec03fa3f7dfaf80bd462a855a39c\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 3 00:01:37.860383 kubelet[1543]: E1003 00:01:37.860360 1543 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="30ff90e102927eb8ead9c21ad9d9d305f736ec03fa3f7dfaf80bd462a855a39c" Oct 3 00:01:37.860467 kubelet[1543]: E1003 00:01:37.860448 1543 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 3 00:01:37.860467 kubelet[1543]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 3 00:01:37.860467 kubelet[1543]: rm /hostbin/cilium-mount Oct 3 00:01:37.860467 kubelet[1543]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2ggbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 3 00:01:37.860711 kubelet[1543]: E1003 00:01:37.860489 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 3 00:01:38.272440 kubelet[1543]: I1003 00:01:38.272384 1543 scope.go:115] "RemoveContainer" containerID="b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c" Oct 3 00:01:38.273200 kubelet[1543]: I1003 00:01:38.273124 1543 scope.go:115] "RemoveContainer" containerID="b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c" Oct 3 00:01:38.275045 env[1152]: time="2023-10-03T00:01:38.274953991Z" level=info msg="RemoveContainer for \"b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c\"" Oct 3 00:01:38.276348 env[1152]: time="2023-10-03T00:01:38.276192261Z" level=info msg="RemoveContainer for \"b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c\"" Oct 3 00:01:38.277188 env[1152]: time="2023-10-03T00:01:38.277008678Z" level=error msg="RemoveContainer for \"b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c\" failed" error="failed to set removing state for container \"b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c\": container is already in removing state" Oct 3 00:01:38.277903 kubelet[1543]: E1003 00:01:38.277840 1543 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c\": container is already in 
removing state" containerID="b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c" Oct 3 00:01:38.278175 kubelet[1543]: E1003 00:01:38.277953 1543 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c": container is already in removing state; Skipping pod "cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9)" Oct 3 00:01:38.278705 kubelet[1543]: E1003 00:01:38.278659 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-nqz78_kube-system(47bb3314-73da-477f-92b3-f06384a9e1c9)\"" pod="kube-system/cilium-nqz78" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 Oct 3 00:01:38.280755 env[1152]: time="2023-10-03T00:01:38.280712431Z" level=info msg="RemoveContainer for \"b14be80bb182d8f78ff6ea7cf92bfd6626117241cc6622516078ce96198d651c\" returns successfully" Oct 3 00:01:38.797475 kubelet[1543]: E1003 00:01:38.797364 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:38.819653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30ff90e102927eb8ead9c21ad9d9d305f736ec03fa3f7dfaf80bd462a855a39c-rootfs.mount: Deactivated successfully. Oct 3 00:01:39.798742 kubelet[1543]: E1003 00:01:39.798641 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:40.345698 env[1152]: time="2023-10-03T00:01:40.345564973Z" level=info msg="StopPodSandbox for \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\"" Oct 3 00:01:40.346566 env[1152]: time="2023-10-03T00:01:40.345710221Z" level=info msg="Container to stop \"30ff90e102927eb8ead9c21ad9d9d305f736ec03fa3f7dfaf80bd462a855a39c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 3 00:01:40.349749 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003-shm.mount: Deactivated successfully. Oct 3 00:01:40.366055 systemd[1]: cri-containerd-9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003.scope: Deactivated successfully. Oct 3 00:01:40.365000 audit: BPF prog-id=66 op=UNLOAD Oct 3 00:01:40.396056 kernel: audit: type=1334 audit(1696291300.365:615): prog-id=66 op=UNLOAD Oct 3 00:01:40.400000 audit: BPF prog-id=69 op=UNLOAD Oct 3 00:01:40.431199 kernel: audit: type=1334 audit(1696291300.400:616): prog-id=69 op=UNLOAD Oct 3 00:01:40.444964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003-rootfs.mount: Deactivated successfully. 
Oct 3 00:01:40.446451 env[1152]: time="2023-10-03T00:01:40.446376762Z" level=info msg="shim disconnected" id=9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003 Oct 3 00:01:40.446451 env[1152]: time="2023-10-03T00:01:40.446426385Z" level=warning msg="cleaning up after shim disconnected" id=9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003 namespace=k8s.io Oct 3 00:01:40.446451 env[1152]: time="2023-10-03T00:01:40.446434739Z" level=info msg="cleaning up dead shim" Oct 3 00:01:40.462848 env[1152]: time="2023-10-03T00:01:40.462800618Z" level=warning msg="cleanup warnings time=\"2023-10-03T00:01:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2118 runtime=io.containerd.runc.v2\n" Oct 3 00:01:40.462971 env[1152]: time="2023-10-03T00:01:40.462958628Z" level=info msg="TearDown network for sandbox \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\" successfully" Oct 3 00:01:40.463002 env[1152]: time="2023-10-03T00:01:40.462972405Z" level=info msg="StopPodSandbox for \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\" returns successfully" Oct 3 00:01:40.589529 kubelet[1543]: I1003 00:01:40.589455 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-cilium-cgroup\") pod \"47bb3314-73da-477f-92b3-f06384a9e1c9\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " Oct 3 00:01:40.589959 kubelet[1543]: I1003 00:01:40.589556 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-bpf-maps\") pod \"47bb3314-73da-477f-92b3-f06384a9e1c9\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " Oct 3 00:01:40.589959 kubelet[1543]: I1003 00:01:40.589621 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-host-proc-sys-net\") pod \"47bb3314-73da-477f-92b3-f06384a9e1c9\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " Oct 3 00:01:40.589959 kubelet[1543]: I1003 00:01:40.589630 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "47bb3314-73da-477f-92b3-f06384a9e1c9" (UID: "47bb3314-73da-477f-92b3-f06384a9e1c9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 3 00:01:40.589959 kubelet[1543]: I1003 00:01:40.589693 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "47bb3314-73da-477f-92b3-f06384a9e1c9" (UID: "47bb3314-73da-477f-92b3-f06384a9e1c9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 3 00:01:40.589959 kubelet[1543]: I1003 00:01:40.589730 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-cni-path" (OuterVolumeSpecName: "cni-path") pod "47bb3314-73da-477f-92b3-f06384a9e1c9" (UID: "47bb3314-73da-477f-92b3-f06384a9e1c9"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 3 00:01:40.589959 kubelet[1543]: I1003 00:01:40.589681 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-cni-path\") pod \"47bb3314-73da-477f-92b3-f06384a9e1c9\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " Oct 3 00:01:40.589959 kubelet[1543]: I1003 00:01:40.589755 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "47bb3314-73da-477f-92b3-f06384a9e1c9" (UID: "47bb3314-73da-477f-92b3-f06384a9e1c9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 3 00:01:40.589959 kubelet[1543]: I1003 00:01:40.589887 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-lib-modules\") pod \"47bb3314-73da-477f-92b3-f06384a9e1c9\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " Oct 3 00:01:40.589959 kubelet[1543]: I1003 00:01:40.589937 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "47bb3314-73da-477f-92b3-f06384a9e1c9" (UID: "47bb3314-73da-477f-92b3-f06384a9e1c9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 3 00:01:40.591601 kubelet[1543]: I1003 00:01:40.590034 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-xtables-lock\") pod \"47bb3314-73da-477f-92b3-f06384a9e1c9\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " Oct 3 00:01:40.591601 kubelet[1543]: I1003 00:01:40.590032 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "47bb3314-73da-477f-92b3-f06384a9e1c9" (UID: "47bb3314-73da-477f-92b3-f06384a9e1c9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 3 00:01:40.591601 kubelet[1543]: I1003 00:01:40.590147 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-hostproc\") pod \"47bb3314-73da-477f-92b3-f06384a9e1c9\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " Oct 3 00:01:40.591601 kubelet[1543]: I1003 00:01:40.590229 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-hostproc" (OuterVolumeSpecName: "hostproc") pod "47bb3314-73da-477f-92b3-f06384a9e1c9" (UID: "47bb3314-73da-477f-92b3-f06384a9e1c9"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 3 00:01:40.591601 kubelet[1543]: I1003 00:01:40.590253 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-etc-cni-netd\") pod \"47bb3314-73da-477f-92b3-f06384a9e1c9\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " Oct 3 00:01:40.591601 kubelet[1543]: I1003 00:01:40.590356 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-host-proc-sys-kernel\") pod \"47bb3314-73da-477f-92b3-f06384a9e1c9\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " Oct 3 00:01:40.591601 kubelet[1543]: I1003 00:01:40.590387 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "47bb3314-73da-477f-92b3-f06384a9e1c9" (UID: "47bb3314-73da-477f-92b3-f06384a9e1c9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 3 00:01:40.591601 kubelet[1543]: I1003 00:01:40.590432 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "47bb3314-73da-477f-92b3-f06384a9e1c9" (UID: "47bb3314-73da-477f-92b3-f06384a9e1c9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 3 00:01:40.591601 kubelet[1543]: I1003 00:01:40.590478 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47bb3314-73da-477f-92b3-f06384a9e1c9-hubble-tls\") pod \"47bb3314-73da-477f-92b3-f06384a9e1c9\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " Oct 3 00:01:40.591601 kubelet[1543]: I1003 00:01:40.590577 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-cilium-run\") pod \"47bb3314-73da-477f-92b3-f06384a9e1c9\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " Oct 3 00:01:40.591601 kubelet[1543]: I1003 00:01:40.590671 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "47bb3314-73da-477f-92b3-f06384a9e1c9" (UID: "47bb3314-73da-477f-92b3-f06384a9e1c9"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 3 00:01:40.591601 kubelet[1543]: I1003 00:01:40.590700 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ggbc\" (UniqueName: \"kubernetes.io/projected/47bb3314-73da-477f-92b3-f06384a9e1c9-kube-api-access-2ggbc\") pod \"47bb3314-73da-477f-92b3-f06384a9e1c9\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " Oct 3 00:01:40.591601 kubelet[1543]: I1003 00:01:40.590834 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47bb3314-73da-477f-92b3-f06384a9e1c9-clustermesh-secrets\") pod \"47bb3314-73da-477f-92b3-f06384a9e1c9\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " Oct 3 00:01:40.591601 kubelet[1543]: I1003 00:01:40.590907 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47bb3314-73da-477f-92b3-f06384a9e1c9-cilium-config-path\") pod \"47bb3314-73da-477f-92b3-f06384a9e1c9\" (UID: \"47bb3314-73da-477f-92b3-f06384a9e1c9\") " Oct 3 00:01:40.591601 kubelet[1543]: I1003 00:01:40.590975 1543 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-cni-path\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:01:40.591601 kubelet[1543]: I1003 00:01:40.591040 1543 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-lib-modules\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:01:40.593391 kubelet[1543]: I1003 00:01:40.591095 1543 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-xtables-lock\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:01:40.593391 kubelet[1543]: I1003 00:01:40.591129 1543 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-hostproc\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:01:40.593391 kubelet[1543]: I1003 00:01:40.591158 1543 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-etc-cni-netd\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:01:40.593391 kubelet[1543]: I1003 00:01:40.591188 1543 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-host-proc-sys-kernel\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:01:40.593391 kubelet[1543]: I1003 00:01:40.591217 1543 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-host-proc-sys-net\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:01:40.593391 kubelet[1543]: I1003 00:01:40.591246 1543 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-cilium-run\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:01:40.593391 kubelet[1543]: I1003 00:01:40.591275 1543 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-cilium-cgroup\") on node \"10.67.124.213\" 
DevicePath \"\"" Oct 3 00:01:40.593391 kubelet[1543]: I1003 00:01:40.591302 1543 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47bb3314-73da-477f-92b3-f06384a9e1c9-bpf-maps\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:01:40.593391 kubelet[1543]: W1003 00:01:40.591439 1543 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/47bb3314-73da-477f-92b3-f06384a9e1c9/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 3 00:01:40.596489 kubelet[1543]: I1003 00:01:40.596248 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47bb3314-73da-477f-92b3-f06384a9e1c9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "47bb3314-73da-477f-92b3-f06384a9e1c9" (UID: "47bb3314-73da-477f-92b3-f06384a9e1c9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 3 00:01:40.596613 kubelet[1543]: I1003 00:01:40.596602 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47bb3314-73da-477f-92b3-f06384a9e1c9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "47bb3314-73da-477f-92b3-f06384a9e1c9" (UID: "47bb3314-73da-477f-92b3-f06384a9e1c9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 3 00:01:40.596711 kubelet[1543]: I1003 00:01:40.596697 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47bb3314-73da-477f-92b3-f06384a9e1c9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "47bb3314-73da-477f-92b3-f06384a9e1c9" (UID: "47bb3314-73da-477f-92b3-f06384a9e1c9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 3 00:01:40.596812 kubelet[1543]: I1003 00:01:40.596799 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47bb3314-73da-477f-92b3-f06384a9e1c9-kube-api-access-2ggbc" (OuterVolumeSpecName: "kube-api-access-2ggbc") pod "47bb3314-73da-477f-92b3-f06384a9e1c9" (UID: "47bb3314-73da-477f-92b3-f06384a9e1c9"). InnerVolumeSpecName "kube-api-access-2ggbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 3 00:01:40.597241 systemd[1]: var-lib-kubelet-pods-47bb3314\x2d73da\x2d477f\x2d92b3\x2df06384a9e1c9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2ggbc.mount: Deactivated successfully. Oct 3 00:01:40.597305 systemd[1]: var-lib-kubelet-pods-47bb3314\x2d73da\x2d477f\x2d92b3\x2df06384a9e1c9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 3 00:01:40.597343 systemd[1]: var-lib-kubelet-pods-47bb3314\x2d73da\x2d477f\x2d92b3\x2df06384a9e1c9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Oct 3 00:01:40.692428 kubelet[1543]: I1003 00:01:40.692321 1543 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47bb3314-73da-477f-92b3-f06384a9e1c9-cilium-config-path\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:01:40.692428 kubelet[1543]: I1003 00:01:40.692395 1543 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2ggbc\" (UniqueName: \"kubernetes.io/projected/47bb3314-73da-477f-92b3-f06384a9e1c9-kube-api-access-2ggbc\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:01:40.692428 kubelet[1543]: I1003 00:01:40.692429 1543 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47bb3314-73da-477f-92b3-f06384a9e1c9-clustermesh-secrets\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:01:40.692865 kubelet[1543]: I1003 00:01:40.692461 1543 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47bb3314-73da-477f-92b3-f06384a9e1c9-hubble-tls\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:01:40.799888 kubelet[1543]: E1003 00:01:40.799770 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:40.815675 systemd[1]: Removed slice kubepods-burstable-pod47bb3314_73da_477f_92b3_f06384a9e1c9.slice. Oct 3 00:01:40.961314 kubelet[1543]: W1003 00:01:40.961083 1543 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47bb3314_73da_477f_92b3_f06384a9e1c9.slice/cri-containerd-30ff90e102927eb8ead9c21ad9d9d305f736ec03fa3f7dfaf80bd462a855a39c.scope WatchSource:0}: task 30ff90e102927eb8ead9c21ad9d9d305f736ec03fa3f7dfaf80bd462a855a39c not found: not found Oct 3 00:01:41.286319 kubelet[1543]: I1003 00:01:41.286125 1543 scope.go:115] "RemoveContainer" containerID="30ff90e102927eb8ead9c21ad9d9d305f736ec03fa3f7dfaf80bd462a855a39c" Oct 3 00:01:41.288653 env[1152]: time="2023-10-03T00:01:41.288531947Z" level=info msg="RemoveContainer for \"30ff90e102927eb8ead9c21ad9d9d305f736ec03fa3f7dfaf80bd462a855a39c\"" Oct 3 00:01:41.291776 env[1152]: time="2023-10-03T00:01:41.291686373Z" level=info msg="RemoveContainer for \"30ff90e102927eb8ead9c21ad9d9d305f736ec03fa3f7dfaf80bd462a855a39c\" returns successfully" Oct 3 00:01:41.800951 kubelet[1543]: E1003 00:01:41.800877 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:42.802178 kubelet[1543]: E1003 00:01:42.802104 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:42.803682 kubelet[1543]: E1003 00:01:42.803606 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:01:42.807215 kubelet[1543]: I1003 00:01:42.807142 1543 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=47bb3314-73da-477f-92b3-f06384a9e1c9 path="/var/lib/kubelet/pods/47bb3314-73da-477f-92b3-f06384a9e1c9/volumes" Oct 3 00:01:43.246911 kubelet[1543]: I1003 00:01:43.246803 1543 topology_manager.go:212] "Topology Admit Handler" Oct 3 00:01:43.246911 kubelet[1543]: E1003 00:01:43.246904 1543 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47bb3314-73da-477f-92b3-f06384a9e1c9" containerName="mount-cgroup" 
Oct 3 00:01:43.246911 kubelet[1543]: E1003 00:01:43.246933 1543 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47bb3314-73da-477f-92b3-f06384a9e1c9" containerName="mount-cgroup" Oct 3 00:01:43.247619 kubelet[1543]: E1003 00:01:43.246956 1543 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47bb3314-73da-477f-92b3-f06384a9e1c9" containerName="mount-cgroup" Oct 3 00:01:43.247619 kubelet[1543]: E1003 00:01:43.246975 1543 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47bb3314-73da-477f-92b3-f06384a9e1c9" containerName="mount-cgroup" Oct 3 00:01:43.247619 kubelet[1543]: I1003 00:01:43.247041 1543 memory_manager.go:346] "RemoveStaleState removing state" podUID="47bb3314-73da-477f-92b3-f06384a9e1c9" containerName="mount-cgroup" Oct 3 00:01:43.247619 kubelet[1543]: I1003 00:01:43.247064 1543 memory_manager.go:346] "RemoveStaleState removing state" podUID="47bb3314-73da-477f-92b3-f06384a9e1c9" containerName="mount-cgroup" Oct 3 00:01:43.247619 kubelet[1543]: I1003 00:01:43.247084 1543 memory_manager.go:346] "RemoveStaleState removing state" podUID="47bb3314-73da-477f-92b3-f06384a9e1c9" containerName="mount-cgroup" Oct 3 00:01:43.247619 kubelet[1543]: I1003 00:01:43.247101 1543 memory_manager.go:346] "RemoveStaleState removing state" podUID="47bb3314-73da-477f-92b3-f06384a9e1c9" containerName="mount-cgroup" Oct 3 00:01:43.248369 kubelet[1543]: I1003 00:01:43.247665 1543 topology_manager.go:212] "Topology Admit Handler" Oct 3 00:01:43.248369 kubelet[1543]: E1003 00:01:43.247763 1543 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47bb3314-73da-477f-92b3-f06384a9e1c9" containerName="mount-cgroup" Oct 3 00:01:43.248369 kubelet[1543]: E1003 00:01:43.247791 1543 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47bb3314-73da-477f-92b3-f06384a9e1c9" containerName="mount-cgroup" Oct 3 00:01:43.248369 kubelet[1543]: I1003 00:01:43.247835 1543 memory_manager.go:346] "RemoveStaleState removing state" podUID="47bb3314-73da-477f-92b3-f06384a9e1c9" containerName="mount-cgroup" Oct 3 00:01:43.248369 kubelet[1543]: I1003 00:01:43.247855 1543 memory_manager.go:346] "RemoveStaleState removing state" podUID="47bb3314-73da-477f-92b3-f06384a9e1c9" containerName="mount-cgroup" Oct 3 00:01:43.262247 systemd[1]: Created slice kubepods-besteffort-podd3ec8d45_fb7b_4832_bdc5_1db86de5f255.slice. Oct 3 00:01:43.268156 systemd[1]: Created slice kubepods-burstable-podc90b25bb_7bc8_4cf4_a68e_10accc164104.slice. 
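[editor's note] The replacement workloads are admitted here, and they land in different cgroup slices: cilium-rskpt under kubepods-burstable-… and cilium-operator-574c4bb98d-fnblt under kubepods-besteffort-…. That split follows directly from the Kubernetes QoS class computed from each pod's resource requests and limits. The sketch below implements the standard upstream classification rule; the resource values in the example are illustrative only, since the actual chart values are not in this log.

```python
"""Sketch of the Kubernetes QoS classification that decides which
kubepods-<qos>.slice a pod's cgroup lands in (standard upstream rule,
simplified: it ignores the defaulting of requests to limits)."""

def qos_class(containers: list[dict]) -> str:
    requests = [c.get("requests", {}) for c in containers]
    limits = [c.get("limits", {}) for c in containers]
    if not any(requests) and not any(limits):
        return "BestEffort"                          # -> kubepods-besteffort.slice
    guaranteed = all(
        c.get("limits") and c.get("limits") == c.get("requests")
        and set(c["limits"]) >= {"cpu", "memory"}
        for c in containers
    )
    return "Guaranteed" if guaranteed else "Burstable"  # -> kubepods-burstable.slice

if __name__ == "__main__":
    operator = [{}]                                             # no requests or limits
    agent = [{"requests": {"cpu": "100m", "memory": "512Mi"}}]  # requests only (illustrative)
    print("cilium-operator:", qos_class(operator))   # BestEffort
    print("cilium agent:   ", qos_class(agent))      # Burstable
```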
Oct 3 00:01:43.308731 kubelet[1543]: I1003 00:01:43.308702 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-hostproc\") pod \"cilium-rskpt\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " pod="kube-system/cilium-rskpt" Oct 3 00:01:43.308860 kubelet[1543]: I1003 00:01:43.308756 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c90b25bb-7bc8-4cf4-a68e-10accc164104-clustermesh-secrets\") pod \"cilium-rskpt\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " pod="kube-system/cilium-rskpt" Oct 3 00:01:43.308860 kubelet[1543]: I1003 00:01:43.308783 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-host-proc-sys-net\") pod \"cilium-rskpt\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " pod="kube-system/cilium-rskpt" Oct 3 00:01:43.308860 kubelet[1543]: I1003 00:01:43.308812 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-cni-path\") pod \"cilium-rskpt\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " pod="kube-system/cilium-rskpt" Oct 3 00:01:43.309004 kubelet[1543]: I1003 00:01:43.308856 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-xtables-lock\") pod \"cilium-rskpt\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " pod="kube-system/cilium-rskpt" Oct 3 00:01:43.309004 kubelet[1543]: I1003 00:01:43.308893 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c90b25bb-7bc8-4cf4-a68e-10accc164104-cilium-ipsec-secrets\") pod \"cilium-rskpt\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " pod="kube-system/cilium-rskpt" Oct 3 00:01:43.309004 kubelet[1543]: I1003 00:01:43.308938 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k82jt\" (UniqueName: \"kubernetes.io/projected/d3ec8d45-fb7b-4832-bdc5-1db86de5f255-kube-api-access-k82jt\") pod \"cilium-operator-574c4bb98d-fnblt\" (UID: \"d3ec8d45-fb7b-4832-bdc5-1db86de5f255\") " pod="kube-system/cilium-operator-574c4bb98d-fnblt" Oct 3 00:01:43.309004 kubelet[1543]: I1003 00:01:43.308995 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-bpf-maps\") pod \"cilium-rskpt\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " pod="kube-system/cilium-rskpt" Oct 3 00:01:43.309141 kubelet[1543]: I1003 00:01:43.309033 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7tcr\" (UniqueName: \"kubernetes.io/projected/c90b25bb-7bc8-4cf4-a68e-10accc164104-kube-api-access-q7tcr\") pod \"cilium-rskpt\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " pod="kube-system/cilium-rskpt" Oct 3 00:01:43.309141 kubelet[1543]: I1003 00:01:43.309061 1543 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c90b25bb-7bc8-4cf4-a68e-10accc164104-hubble-tls\") pod \"cilium-rskpt\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " pod="kube-system/cilium-rskpt" Oct 3 00:01:43.309141 kubelet[1543]: I1003 00:01:43.309089 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3ec8d45-fb7b-4832-bdc5-1db86de5f255-cilium-config-path\") pod \"cilium-operator-574c4bb98d-fnblt\" (UID: \"d3ec8d45-fb7b-4832-bdc5-1db86de5f255\") " pod="kube-system/cilium-operator-574c4bb98d-fnblt" Oct 3 00:01:43.309141 kubelet[1543]: I1003 00:01:43.309126 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-cilium-run\") pod \"cilium-rskpt\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " pod="kube-system/cilium-rskpt" Oct 3 00:01:43.309277 kubelet[1543]: I1003 00:01:43.309161 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-cilium-cgroup\") pod \"cilium-rskpt\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " pod="kube-system/cilium-rskpt" Oct 3 00:01:43.309277 kubelet[1543]: I1003 00:01:43.309196 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-etc-cni-netd\") pod \"cilium-rskpt\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " pod="kube-system/cilium-rskpt" Oct 3 00:01:43.309277 kubelet[1543]: I1003 00:01:43.309220 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-lib-modules\") pod \"cilium-rskpt\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " pod="kube-system/cilium-rskpt" Oct 3 00:01:43.309277 kubelet[1543]: I1003 00:01:43.309266 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c90b25bb-7bc8-4cf4-a68e-10accc164104-cilium-config-path\") pod \"cilium-rskpt\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " pod="kube-system/cilium-rskpt" Oct 3 00:01:43.309390 kubelet[1543]: I1003 00:01:43.309301 1543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-host-proc-sys-kernel\") pod \"cilium-rskpt\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " pod="kube-system/cilium-rskpt" Oct 3 00:01:43.568804 env[1152]: time="2023-10-03T00:01:43.568672909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-fnblt,Uid:d3ec8d45-fb7b-4832-bdc5-1db86de5f255,Namespace:kube-system,Attempt:0,}" Oct 3 00:01:43.582454 env[1152]: time="2023-10-03T00:01:43.582373859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 3 00:01:43.582454 env[1152]: time="2023-10-03T00:01:43.582394678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 3 00:01:43.582454 env[1152]: time="2023-10-03T00:01:43.582402833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 3 00:01:43.582546 env[1152]: time="2023-10-03T00:01:43.582463253Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/38673c5672bd9ef1052515465f726e379cbeb158feda30ca7e72301ef2dc03e5 pid=2145 runtime=io.containerd.runc.v2 Oct 3 00:01:43.588120 env[1152]: time="2023-10-03T00:01:43.588067752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rskpt,Uid:c90b25bb-7bc8-4cf4-a68e-10accc164104,Namespace:kube-system,Attempt:0,}" Oct 3 00:01:43.593525 env[1152]: time="2023-10-03T00:01:43.593438768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 3 00:01:43.593525 env[1152]: time="2023-10-03T00:01:43.593462799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 3 00:01:43.593525 env[1152]: time="2023-10-03T00:01:43.593470743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 3 00:01:43.593626 env[1152]: time="2023-10-03T00:01:43.593539257Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/97dc686a2f3f9c375502f42f3b1eeaf7f1aa3ef94480ba30559c451e3c42dc7a pid=2168 runtime=io.containerd.runc.v2 Oct 3 00:01:43.601842 systemd[1]: Started cri-containerd-38673c5672bd9ef1052515465f726e379cbeb158feda30ca7e72301ef2dc03e5.scope. 
Oct 3 00:01:43.608000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.608000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.734127 kernel: audit: type=1400 audit(1696291303.608:617): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.734171 kernel: audit: type=1400 audit(1696291303.608:618): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.734187 kernel: audit: type=1400 audit(1696291303.608:619): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.608000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.795907 kernel: audit: type=1400 audit(1696291303.608:620): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.608000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.797184 systemd[1]: Started cri-containerd-97dc686a2f3f9c375502f42f3b1eeaf7f1aa3ef94480ba30559c451e3c42dc7a.scope. 
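[editor's note] From this point the log is dominated by SELinux AVC and audit records: each runc invocation (and systemd, while wiring up the scopes' BPF device filters) exercises the bpf and perfmon capabilities, producing "denied { bpf }" / "denied { perfmon }" AVCs alongside BPF prog-id LOAD/UNLOAD events. The accompanying SYSCALL records still show success=yes, most likely because the kernel falls back to CAP_SYS_ADMIN after the CAP_BPF/CAP_PERFMON check is refused, so the sandboxes start despite the noise. A quick way to summarise these records, sketched against the line formats visible here:

```python
"""Summarise the SELinux/audit records interleaved in this part of the log.

Reads log text on stdin and tallies AVC 'denied' records by comm and
permission, plus BPF prog LOAD/UNLOAD events.
Usage: python3 audit_summary.py < journal.txt
"""
import re
import sys
from collections import Counter

AVC_RE = re.compile(r'AVC avc:\s+denied\s+\{ (?P<perm>[^}]+) \}.*?comm="(?P<comm>[^"]+)"')
BPF_RE = re.compile(r'BPF prog-id=(?P<id>\d+) op=(?P<op>LOAD|UNLOAD)')

def summarise(text: str):
    denials = Counter((m["comm"], m["perm"].strip()) for m in AVC_RE.finditer(text))
    bpf_ops = Counter(m["op"] for m in BPF_RE.finditer(text))
    return denials, bpf_ops

if __name__ == "__main__":
    denials, bpf_ops = summarise(sys.stdin.read())
    for (comm, perm), n in denials.most_common():
        print(f"{n:4d}  denied {{ {perm} }}  comm={comm}")
    for op, n in sorted(bpf_ops.items()):
        print(f"{n:4d}  BPF prog {op}")
```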
Oct 3 00:01:43.802673 kubelet[1543]: E1003 00:01:43.802656 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:43.608000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.920301 kernel: audit: type=1400 audit(1696291303.608:621): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.920344 kernel: audit: type=1400 audit(1696291303.608:622): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.608000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.608000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.044598 kernel: audit: type=1400 audit(1696291303.608:623): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.044634 kernel: audit: type=1400 audit(1696291303.608:624): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.608000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.608000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.794000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.794000 audit: BPF prog-id=73 op=LOAD Oct 3 00:01:43.795000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.795000 audit[2155]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2145 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 3 00:01:43.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338363733633536373262643965663130353235313534363566373236 Oct 3 00:01:43.795000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 
00:01:43.795000 audit[2155]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2145 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 3 00:01:43.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338363733633536373262643965663130353235313534363566373236 Oct 3 00:01:43.795000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.795000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.795000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.795000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.795000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.795000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.795000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.795000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.795000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 3 00:01:43.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.795000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.795000 audit: BPF prog-id=74 op=LOAD Oct 3 00:01:43.795000 audit[2155]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0002b0b80 items=0 ppid=2145 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 3 00:01:43.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338363733633536373262643965663130353235313534363566373236 Oct 3 00:01:43.981000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit: BPF prog-id=75 op=LOAD Oct 3 00:01:43.981000 audit[2177]: AVC avc: denied { bpf } for pid=2177 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit[2177]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000147c48 a2=10 a3=1c items=0 ppid=2168 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 3 00:01:43.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646336383661326633663963333735353032663432663362316565 Oct 3 00:01:43.981000 audit[2177]: AVC avc: denied { perfmon } for pid=2177 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit[2177]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001476b0 a2=3c a3=c items=0 ppid=2168 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 3 00:01:43.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646336383661326633663963333735353032663432663362316565 Oct 3 00:01:43.981000 audit[2177]: AVC avc: denied { bpf } for pid=2177 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit[2177]: AVC avc: denied { bpf } for pid=2177 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit[2177]: AVC avc: denied { bpf } for pid=2177 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit[2177]: AVC avc: denied { perfmon } for pid=2177 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit[2177]: AVC avc: denied { perfmon } for pid=2177 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit[2177]: AVC avc: denied { perfmon } for pid=2177 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit[2177]: AVC avc: denied { perfmon } for pid=2177 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit[2177]: AVC avc: denied { perfmon } for pid=2177 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 3 00:01:43.981000 audit[2177]: AVC avc: denied { bpf } for pid=2177 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit: BPF prog-id=76 op=LOAD Oct 3 00:01:43.981000 audit[2155]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0002b0bc8 items=0 ppid=2145 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 3 00:01:43.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338363733633536373262643965663130353235313534363566373236 Oct 3 00:01:44.105000 audit: BPF prog-id=76 op=UNLOAD Oct 3 00:01:44.105000 audit: BPF prog-id=74 op=UNLOAD Oct 3 00:01:44.105000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit[2177]: AVC avc: denied { bpf } for pid=2177 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:43.981000 audit: BPF prog-id=77 op=LOAD Oct 3 00:01:43.981000 audit[2177]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001479d8 a2=78 a3=c000187bb0 items=0 ppid=2168 pid=2177 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 3 00:01:43.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646336383661326633663963333735353032663432663362316565 Oct 3 00:01:44.105000 audit[2177]: AVC avc: denied { bpf } for pid=2177 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2177]: AVC avc: denied { bpf } for pid=2177 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2177]: AVC avc: denied { perfmon } for pid=2177 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2177]: AVC avc: denied { perfmon } for pid=2177 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2177]: AVC avc: denied { perfmon } for pid=2177 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2177]: AVC avc: denied { perfmon } for pid=2177 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2177]: AVC avc: denied { perfmon } for pid=2177 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2177]: AVC avc: denied { bpf } for pid=2177 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2177]: AVC avc: denied { bpf } for pid=2177 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit: BPF prog-id=78 op=LOAD Oct 3 00:01:44.105000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit: BPF prog-id=79 op=LOAD Oct 3 00:01:44.105000 audit[2177]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000147770 a2=78 a3=c000187bf8 items=0 ppid=2168 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 3 00:01:44.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646336383661326633663963333735353032663432663362316565 Oct 3 00:01:44.105000 audit[2155]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0002b0fd8 items=0 ppid=2145 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 3 00:01:44.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338363733633536373262643965663130353235313534363566373236 Oct 3 00:01:44.105000 audit: BPF prog-id=78 op=UNLOAD Oct 3 00:01:44.105000 audit: BPF prog-id=77 op=UNLOAD Oct 3 00:01:44.105000 audit[2177]: AVC avc: denied { bpf } for pid=2177 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2177]: AVC avc: denied { bpf } for pid=2177 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2177]: AVC avc: denied { bpf } for pid=2177 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2177]: AVC avc: denied { perfmon } for pid=2177 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2177]: AVC avc: denied { perfmon } for pid=2177 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2177]: AVC avc: denied { perfmon } for pid=2177 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2177]: AVC avc: denied { perfmon } for pid=2177 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2177]: AVC avc: denied { perfmon } for pid=2177 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2177]: AVC avc: denied { bpf } for pid=2177 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit[2177]: AVC avc: denied { bpf } for pid=2177 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:44.105000 audit: BPF prog-id=80 op=LOAD Oct 3 00:01:44.105000 audit[2177]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000147c30 a2=78 a3=c0001b4008 items=0 ppid=2168 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 3 00:01:44.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646336383661326633663963333735353032663432663362316565 Oct 3 00:01:44.123725 env[1152]: time="2023-10-03T00:01:44.123699999Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-rskpt,Uid:c90b25bb-7bc8-4cf4-a68e-10accc164104,Namespace:kube-system,Attempt:0,} returns sandbox id \"97dc686a2f3f9c375502f42f3b1eeaf7f1aa3ef94480ba30559c451e3c42dc7a\"" Oct 3 00:01:44.124841 env[1152]: time="2023-10-03T00:01:44.124826236Z" level=info msg="CreateContainer within sandbox \"97dc686a2f3f9c375502f42f3b1eeaf7f1aa3ef94480ba30559c451e3c42dc7a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 3 00:01:44.129666 env[1152]: time="2023-10-03T00:01:44.129650105Z" level=info msg="CreateContainer within sandbox \"97dc686a2f3f9c375502f42f3b1eeaf7f1aa3ef94480ba30559c451e3c42dc7a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487\"" Oct 3 00:01:44.129836 env[1152]: time="2023-10-03T00:01:44.129797689Z" level=info msg="StartContainer for \"79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487\"" Oct 3 00:01:44.135388 env[1152]: time="2023-10-03T00:01:44.135366497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-fnblt,Uid:d3ec8d45-fb7b-4832-bdc5-1db86de5f255,Namespace:kube-system,Attempt:0,} returns sandbox id \"38673c5672bd9ef1052515465f726e379cbeb158feda30ca7e72301ef2dc03e5\"" Oct 3 00:01:44.136015 env[1152]: time="2023-10-03T00:01:44.136000163Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 3 00:01:44.148938 systemd[1]: Started cri-containerd-79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487.scope. Oct 3 00:01:44.154380 systemd[1]: cri-containerd-79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487.scope: Deactivated successfully. Oct 3 00:01:44.154531 systemd[1]: Stopped cri-containerd-79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487.scope. 
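The PROCTITLE records above carry the runc command line hex-encoded, with NUL bytes separating the arguments. A minimal sketch, assuming standard auditd hex encoding, that decodes a prefix of one of those payloads (the decoder and its names are illustrative, not part of any tool in this log):

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle converts an audit PROCTITLE hex payload back into the
// NUL-separated argv it encodes and joins it with spaces for display.
func decodeProctitle(payload string) (string, error) {
	raw, err := hex.DecodeString(payload)
	if err != nil {
		return "", err
	}
	return strings.Join(strings.Split(string(raw), "\x00"), " "), nil
}

func main() {
	// Prefix of one PROCTITLE payload from the records above.
	p := "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	cmd, err := decodeProctitle(p)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // runc --root /run/containerd/runc/k8s.io
}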
Oct 3 00:01:44.161797 env[1152]: time="2023-10-03T00:01:44.161770234Z" level=info msg="shim disconnected" id=79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487 Oct 3 00:01:44.161875 env[1152]: time="2023-10-03T00:01:44.161798887Z" level=warning msg="cleaning up after shim disconnected" id=79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487 namespace=k8s.io Oct 3 00:01:44.161875 env[1152]: time="2023-10-03T00:01:44.161807464Z" level=info msg="cleaning up dead shim" Oct 3 00:01:44.177926 env[1152]: time="2023-10-03T00:01:44.177868778Z" level=warning msg="cleanup warnings time=\"2023-10-03T00:01:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2239 runtime=io.containerd.runc.v2\ntime=\"2023-10-03T00:01:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 3 00:01:44.178090 env[1152]: time="2023-10-03T00:01:44.178053553Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" Oct 3 00:01:44.178260 env[1152]: time="2023-10-03T00:01:44.178228486Z" level=error msg="Failed to pipe stdout of container \"79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487\"" error="reading from a closed fifo" Oct 3 00:01:44.178313 env[1152]: time="2023-10-03T00:01:44.178239326Z" level=error msg="Failed to pipe stderr of container \"79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487\"" error="reading from a closed fifo" Oct 3 00:01:44.178785 env[1152]: time="2023-10-03T00:01:44.178758317Z" level=error msg="StartContainer for \"79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 3 00:01:44.178920 kubelet[1543]: E1003 00:01:44.178906 1543 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487" Oct 3 00:01:44.179039 kubelet[1543]: E1003 00:01:44.178996 1543 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 3 00:01:44.179039 kubelet[1543]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 3 00:01:44.179039 kubelet[1543]: rm /hostbin/cilium-mount Oct 3 00:01:44.179039 kubelet[1543]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-q7tcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-rskpt_kube-system(c90b25bb-7bc8-4cf4-a68e-10accc164104): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 3 00:01:44.179039 kubelet[1543]: E1003 00:01:44.179031 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rskpt" podUID=c90b25bb-7bc8-4cf4-a68e-10accc164104 Oct 3 00:01:44.302276 env[1152]: time="2023-10-03T00:01:44.302186295Z" level=info msg="CreateContainer within sandbox \"97dc686a2f3f9c375502f42f3b1eeaf7f1aa3ef94480ba30559c451e3c42dc7a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 3 00:01:44.318406 env[1152]: time="2023-10-03T00:01:44.318304158Z" level=info msg="CreateContainer within sandbox \"97dc686a2f3f9c375502f42f3b1eeaf7f1aa3ef94480ba30559c451e3c42dc7a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f\"" Oct 3 00:01:44.319365 env[1152]: time="2023-10-03T00:01:44.319247304Z" level=info msg="StartContainer for \"dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f\"" Oct 3 00:01:44.365886 systemd[1]: Started cri-containerd-dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f.scope. Oct 3 00:01:44.387722 systemd[1]: cri-containerd-dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f.scope: Deactivated successfully. 
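The StartContainer failures above all bottom out at the same spot: before starting the container process, runc writes the container's SELinux process label to /proc/self/attr/keycreate so that keys created for the container get that label, and the kernel rejects the write with EINVAL. A minimal sketch of that write, assuming a label of spc_t/s0 as in the SELinuxOptions of the spec above (illustrative only, not runc's actual code path):

package main

import (
	"fmt"
	"os"
)

// setKeyCreateLabel asks the kernel to label keys created by this thread
// with the given SELinux context; this is the write the log reports as
// "write /proc/self/attr/keycreate: invalid argument".
func setKeyCreateLabel(label string) error {
	f, err := os.OpenFile("/proc/self/attr/keycreate", os.O_WRONLY, 0)
	if err != nil {
		return err
	}
	defer f.Close()
	if _, err := f.Write([]byte(label)); err != nil {
		return fmt.Errorf("write keycreate label %q: %w", label, err)
	}
	return nil
}

func main() {
	// Label assumed from the container spec (SELinuxOptions Type:spc_t, Level:s0).
	if err := setKeyCreateLabel("system_u:system_r:spc_t:s0"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}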
Oct 3 00:01:44.397531 env[1152]: time="2023-10-03T00:01:44.397379315Z" level=info msg="shim disconnected" id=dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f Oct 3 00:01:44.397531 env[1152]: time="2023-10-03T00:01:44.397493140Z" level=warning msg="cleaning up after shim disconnected" id=dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f namespace=k8s.io Oct 3 00:01:44.397531 env[1152]: time="2023-10-03T00:01:44.397520841Z" level=info msg="cleaning up dead shim" Oct 3 00:01:44.414331 env[1152]: time="2023-10-03T00:01:44.414170016Z" level=warning msg="cleanup warnings time=\"2023-10-03T00:01:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2274 runtime=io.containerd.runc.v2\ntime=\"2023-10-03T00:01:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 3 00:01:44.415137 env[1152]: time="2023-10-03T00:01:44.414936930Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" Oct 3 00:01:44.415488 env[1152]: time="2023-10-03T00:01:44.415379032Z" level=error msg="Failed to pipe stdout of container \"dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f\"" error="reading from a closed fifo" Oct 3 00:01:44.415710 env[1152]: time="2023-10-03T00:01:44.415539791Z" level=error msg="Failed to pipe stderr of container \"dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f\"" error="reading from a closed fifo" Oct 3 00:01:44.417116 env[1152]: time="2023-10-03T00:01:44.416975594Z" level=error msg="StartContainer for \"dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 3 00:01:44.417519 kubelet[1543]: E1003 00:01:44.417434 1543 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f" Oct 3 00:01:44.417755 kubelet[1543]: E1003 00:01:44.417632 1543 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 3 00:01:44.417755 kubelet[1543]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 3 00:01:44.417755 kubelet[1543]: rm /hostbin/cilium-mount Oct 3 00:01:44.417755 kubelet[1543]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-q7tcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-rskpt_kube-system(c90b25bb-7bc8-4cf4-a68e-10accc164104): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 3 00:01:44.417755 kubelet[1543]: E1003 00:01:44.417718 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rskpt" podUID=c90b25bb-7bc8-4cf4-a68e-10accc164104 Oct 3 00:01:44.803292 kubelet[1543]: E1003 00:01:44.803188 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:45.299709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount403096003.mount: Deactivated successfully. 
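The AVC records in this window keep denying the same two capabilities to runc, capability=38 and capability=39 in class capability2, while it issues syscall=321 on arch=c000003e. A small lookup sketch for reading those numbers, with values taken from linux/capability.h and the x86_64 syscall table (an illustrative decoder for just these records, not an exhaustive one):

package main

import "fmt"

// capNames covers only the capability2 bits that appear in these records.
var capNames = map[int]string{
	38: "CAP_PERFMON", // perf_event_open and related observability (Linux >= 5.8)
	39: "CAP_BPF",     // bpf() program and map operations (Linux >= 5.8)
}

// x8664Syscalls: arch=c000003e in the SYSCALL records is AUDIT_ARCH_X86_64.
var x8664Syscalls = map[int]string{
	321: "bpf",
}

func main() {
	for _, c := range []int{38, 39} {
		fmt.Printf("capability=%d -> %s\n", c, capNames[c])
	}
	fmt.Printf("syscall=%d -> %s(2)\n", 321, x8664Syscalls[321])
}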
Oct 3 00:01:45.302127 kubelet[1543]: I1003 00:01:45.302082 1543 scope.go:115] "RemoveContainer" containerID="79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487" Oct 3 00:01:45.302370 kubelet[1543]: I1003 00:01:45.302326 1543 scope.go:115] "RemoveContainer" containerID="79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487" Oct 3 00:01:45.302912 env[1152]: time="2023-10-03T00:01:45.302883241Z" level=info msg="RemoveContainer for \"79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487\"" Oct 3 00:01:45.303256 env[1152]: time="2023-10-03T00:01:45.303115474Z" level=info msg="RemoveContainer for \"79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487\"" Oct 3 00:01:45.303256 env[1152]: time="2023-10-03T00:01:45.303190752Z" level=error msg="RemoveContainer for \"79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487\" failed" error="failed to set removing state for container \"79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487\": container is already in removing state" Oct 3 00:01:45.303360 kubelet[1543]: E1003 00:01:45.303311 1543 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487\": container is already in removing state" containerID="79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487" Oct 3 00:01:45.303360 kubelet[1543]: E1003 00:01:45.303341 1543 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487": container is already in removing state; Skipping pod "cilium-rskpt_kube-system(c90b25bb-7bc8-4cf4-a68e-10accc164104)" Oct 3 00:01:45.303607 kubelet[1543]: E1003 00:01:45.303591 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-rskpt_kube-system(c90b25bb-7bc8-4cf4-a68e-10accc164104)\"" pod="kube-system/cilium-rskpt" podUID=c90b25bb-7bc8-4cf4-a68e-10accc164104 Oct 3 00:01:45.346367 env[1152]: time="2023-10-03T00:01:45.346237122Z" level=info msg="RemoveContainer for \"79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487\" returns successfully" Oct 3 00:01:45.789374 env[1152]: time="2023-10-03T00:01:45.789326179Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 3 00:01:45.789985 env[1152]: time="2023-10-03T00:01:45.789941143Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 3 00:01:45.791032 env[1152]: time="2023-10-03T00:01:45.790974830Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 3 00:01:45.791601 env[1152]: time="2023-10-03T00:01:45.791585651Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 3 00:01:45.792632 env[1152]: time="2023-10-03T00:01:45.792595647Z" level=info msg="CreateContainer within sandbox \"38673c5672bd9ef1052515465f726e379cbeb158feda30ca7e72301ef2dc03e5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 3 00:01:45.798016 env[1152]: time="2023-10-03T00:01:45.797953122Z" level=info msg="CreateContainer within sandbox \"38673c5672bd9ef1052515465f726e379cbeb158feda30ca7e72301ef2dc03e5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82\"" Oct 3 00:01:45.798328 env[1152]: time="2023-10-03T00:01:45.798291759Z" level=info msg="StartContainer for \"0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82\"" Oct 3 00:01:45.798397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3270974349.mount: Deactivated successfully. Oct 3 00:01:45.804212 kubelet[1543]: E1003 00:01:45.804170 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:45.820528 systemd[1]: Started cri-containerd-0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82.scope. Oct 3 00:01:45.824000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.851952 kernel: kauditd_printk_skb: 106 callbacks suppressed Oct 3 00:01:45.851989 kernel: audit: type=1400 audit(1696291305.824:653): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.824000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.976144 kernel: audit: type=1400 audit(1696291305.824:654): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.976174 kernel: audit: type=1400 audit(1696291305.824:655): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.824000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.038222 kernel: audit: type=1400 audit(1696291305.824:656): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.824000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.100742 kernel: audit: type=1400 audit(1696291305.824:657): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 
00:01:45.824000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.163394 kernel: audit: type=1400 audit(1696291305.824:658): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.824000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.225981 kernel: audit: type=1400 audit(1696291305.824:659): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.824000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.288584 kernel: audit: type=1400 audit(1696291305.824:660): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.824000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.351130 kernel: audit: type=1400 audit(1696291305.824:661): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.824000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.413507 kernel: audit: type=1400 audit(1696291305.975:662): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.975000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.975000 audit: BPF prog-id=81 op=LOAD Oct 3 00:01:45.975000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.975000 audit[2293]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2145 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 3 00:01:45.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031333461356635363436323064336563363164386139396335633663 Oct 3 00:01:45.975000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 3 00:01:45.975000 audit[2293]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=2145 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 3 00:01:45.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031333461356635363436323064336563363164386139396335633663 Oct 3 00:01:45.975000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.975000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.975000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.975000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.975000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.975000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.975000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.975000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.975000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.975000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:45.975000 audit: BPF prog-id=82 op=LOAD Oct 3 00:01:45.975000 audit[2293]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0002410f0 items=0 ppid=2145 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 3 00:01:45.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031333461356635363436323064336563363164386139396335633663 Oct 3 00:01:46.099000 audit[2293]: AVC avc: denied { bpf } for 
pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.099000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.099000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.099000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.099000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.099000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.099000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.099000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.099000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.099000 audit: BPF prog-id=83 op=LOAD Oct 3 00:01:46.099000 audit[2293]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000241138 items=0 ppid=2145 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 3 00:01:46.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031333461356635363436323064336563363164386139396335633663 Oct 3 00:01:46.225000 audit: BPF prog-id=83 op=UNLOAD Oct 3 00:01:46.225000 audit: BPF prog-id=82 op=UNLOAD Oct 3 00:01:46.225000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.225000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.225000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.225000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.225000 audit[2293]: AVC avc: denied { perfmon } for 
pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.225000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.225000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.225000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.225000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.225000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 3 00:01:46.225000 audit: BPF prog-id=84 op=LOAD Oct 3 00:01:46.225000 audit[2293]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000241548 items=0 ppid=2145 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 3 00:01:46.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031333461356635363436323064336563363164386139396335633663 Oct 3 00:01:46.496213 env[1152]: time="2023-10-03T00:01:46.496163529Z" level=info msg="StartContainer for \"0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82\" returns successfully" Oct 3 00:01:46.526000 audit[2303]: AVC avc: denied { map_create } for pid=2303 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c338,c963 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c338,c963 tclass=bpf permissive=0 Oct 3 00:01:46.526000 audit[2303]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c00017f7d0 a2=48 a3=c00017f7c0 items=0 ppid=2145 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c338,c963 key=(null) Oct 3 00:01:46.526000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 3 00:01:46.805133 kubelet[1543]: E1003 00:01:46.804924 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:47.269149 kubelet[1543]: W1003 00:01:47.269034 1543 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc90b25bb_7bc8_4cf4_a68e_10accc164104.slice/cri-containerd-79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487.scope WatchSource:0}: container "79046827ba59be82a19633171e1650aa190f73a1df8b5f791a844e29f7e96487" in namespace "k8s.io": not found Oct 3 
00:01:47.805203 kubelet[1543]: E1003 00:01:47.805101 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:47.806055 kubelet[1543]: E1003 00:01:47.805695 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:01:48.805738 kubelet[1543]: E1003 00:01:48.805636 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:49.806379 kubelet[1543]: E1003 00:01:49.806278 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:50.379768 kubelet[1543]: W1003 00:01:50.379648 1543 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc90b25bb_7bc8_4cf4_a68e_10accc164104.slice/cri-containerd-dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f.scope WatchSource:0}: task dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f not found: not found Oct 3 00:01:50.806811 kubelet[1543]: E1003 00:01:50.806591 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:51.807005 kubelet[1543]: E1003 00:01:51.806877 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:52.666154 kubelet[1543]: E1003 00:01:52.666053 1543 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:52.806715 kubelet[1543]: E1003 00:01:52.806626 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:01:52.807118 kubelet[1543]: E1003 00:01:52.807072 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:53.808164 kubelet[1543]: E1003 00:01:53.808058 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:54.809321 kubelet[1543]: E1003 00:01:54.809229 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:55.810454 kubelet[1543]: E1003 00:01:55.810350 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:56.807401 env[1152]: time="2023-10-03T00:01:56.807316111Z" level=info msg="CreateContainer within sandbox \"97dc686a2f3f9c375502f42f3b1eeaf7f1aa3ef94480ba30559c451e3c42dc7a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 3 00:01:56.811398 kubelet[1543]: E1003 00:01:56.811308 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:56.822990 env[1152]: time="2023-10-03T00:01:56.822969699Z" level=info msg="CreateContainer within sandbox \"97dc686a2f3f9c375502f42f3b1eeaf7f1aa3ef94480ba30559c451e3c42dc7a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032\"" Oct 3 00:01:56.823114 kubelet[1543]: I1003 00:01:56.823105 1543 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-fnblt" podStartSLOduration=12.16718401 podCreationTimestamp="2023-10-03 00:01:43 +0000 UTC" firstStartedPulling="2023-10-03 00:01:44.135850929 +0000 UTC m=+191.740252300" lastFinishedPulling="2023-10-03 00:01:45.79175297 +0000 UTC m=+193.396154336" observedRunningTime="2023-10-03 00:01:47.326126167 +0000 UTC m=+194.930527615" watchObservedRunningTime="2023-10-03 00:01:56.823086046 +0000 UTC m=+204.427487413" Oct 3 00:01:56.823259 env[1152]: time="2023-10-03T00:01:56.823247501Z" level=info msg="StartContainer for \"ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032\"" Oct 3 00:01:56.844263 systemd[1]: Started cri-containerd-ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032.scope. Oct 3 00:01:56.850120 systemd[1]: cri-containerd-ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032.scope: Deactivated successfully. Oct 3 00:01:56.850305 systemd[1]: Stopped cri-containerd-ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032.scope. Oct 3 00:01:56.852683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032-rootfs.mount: Deactivated successfully. Oct 3 00:01:56.988005 env[1152]: time="2023-10-03T00:01:56.987836064Z" level=info msg="shim disconnected" id=ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032 Oct 3 00:01:56.988005 env[1152]: time="2023-10-03T00:01:56.987948906Z" level=warning msg="cleaning up after shim disconnected" id=ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032 namespace=k8s.io Oct 3 00:01:56.988005 env[1152]: time="2023-10-03T00:01:56.987995258Z" level=info msg="cleaning up dead shim" Oct 3 00:01:57.015949 env[1152]: time="2023-10-03T00:01:57.015842989Z" level=warning msg="cleanup warnings time=\"2023-10-03T00:01:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2358 runtime=io.containerd.runc.v2\ntime=\"2023-10-03T00:01:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 3 00:01:57.016600 env[1152]: time="2023-10-03T00:01:57.016425201Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 3 00:01:57.016995 env[1152]: time="2023-10-03T00:01:57.016873402Z" level=error msg="Failed to pipe stdout of container \"ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032\"" error="reading from a closed fifo" Oct 3 00:01:57.017226 env[1152]: time="2023-10-03T00:01:57.016932975Z" level=error msg="Failed to pipe stderr of container \"ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032\"" error="reading from a closed fifo" Oct 3 00:01:57.018487 env[1152]: time="2023-10-03T00:01:57.018367856Z" level=error msg="StartContainer for \"ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 3 00:01:57.018912 kubelet[1543]: E1003 00:01:57.018841 1543 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed 
to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032" Oct 3 00:01:57.019248 kubelet[1543]: E1003 00:01:57.019164 1543 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 3 00:01:57.019248 kubelet[1543]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 3 00:01:57.019248 kubelet[1543]: rm /hostbin/cilium-mount Oct 3 00:01:57.019248 kubelet[1543]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-q7tcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-rskpt_kube-system(c90b25bb-7bc8-4cf4-a68e-10accc164104): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 3 00:01:57.019836 kubelet[1543]: E1003 00:01:57.019270 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rskpt" podUID=c90b25bb-7bc8-4cf4-a68e-10accc164104 Oct 3 00:01:57.341636 kubelet[1543]: I1003 00:01:57.341572 1543 scope.go:115] "RemoveContainer" containerID="dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f" Oct 3 00:01:57.342320 kubelet[1543]: I1003 00:01:57.342278 1543 scope.go:115] "RemoveContainer" containerID="dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f" Oct 3 00:01:57.344409 env[1152]: time="2023-10-03T00:01:57.344339501Z" level=info msg="RemoveContainer for \"dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f\"" Oct 3 
00:01:57.345398 env[1152]: time="2023-10-03T00:01:57.345335869Z" level=info msg="RemoveContainer for \"dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f\"" Oct 3 00:01:57.345637 env[1152]: time="2023-10-03T00:01:57.345552054Z" level=error msg="RemoveContainer for \"dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f\" failed" error="failed to set removing state for container \"dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f\": container is already in removing state" Oct 3 00:01:57.345907 kubelet[1543]: E1003 00:01:57.345868 1543 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f\": container is already in removing state" containerID="dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f" Oct 3 00:01:57.346091 kubelet[1543]: E1003 00:01:57.345948 1543 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f": container is already in removing state; Skipping pod "cilium-rskpt_kube-system(c90b25bb-7bc8-4cf4-a68e-10accc164104)" Oct 3 00:01:57.346705 kubelet[1543]: E1003 00:01:57.346670 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-rskpt_kube-system(c90b25bb-7bc8-4cf4-a68e-10accc164104)\"" pod="kube-system/cilium-rskpt" podUID=c90b25bb-7bc8-4cf4-a68e-10accc164104 Oct 3 00:01:57.348227 env[1152]: time="2023-10-03T00:01:57.348161332Z" level=info msg="RemoveContainer for \"dff3303531378ddd24ed36e34a5464f9fb80ce2f192a354f2be19df9a2ad972f\" returns successfully" Oct 3 00:01:57.808961 kubelet[1543]: E1003 00:01:57.808759 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:01:57.812345 kubelet[1543]: E1003 00:01:57.812251 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:58.813230 kubelet[1543]: E1003 00:01:58.813130 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:01:59.814434 kubelet[1543]: E1003 00:01:59.814323 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:00.095423 kubelet[1543]: W1003 00:02:00.095204 1543 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc90b25bb_7bc8_4cf4_a68e_10accc164104.slice/cri-containerd-ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032.scope WatchSource:0}: task ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032 not found: not found Oct 3 00:02:00.814828 kubelet[1543]: E1003 00:02:00.814725 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:01.815312 kubelet[1543]: E1003 00:02:01.815208 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:02.809765 kubelet[1543]: E1003 00:02:02.809694 
1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:02:02.816143 kubelet[1543]: E1003 00:02:02.816057 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:03.817066 kubelet[1543]: E1003 00:02:03.816951 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:04.818107 kubelet[1543]: E1003 00:02:04.817998 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:05.818340 kubelet[1543]: E1003 00:02:05.818233 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:06.819349 kubelet[1543]: E1003 00:02:06.819246 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:07.811238 kubelet[1543]: E1003 00:02:07.811173 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:02:07.820016 kubelet[1543]: E1003 00:02:07.819889 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:08.820585 kubelet[1543]: E1003 00:02:08.820475 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:09.821357 kubelet[1543]: E1003 00:02:09.821254 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:10.803631 kubelet[1543]: E1003 00:02:10.803528 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-rskpt_kube-system(c90b25bb-7bc8-4cf4-a68e-10accc164104)\"" pod="kube-system/cilium-rskpt" podUID=c90b25bb-7bc8-4cf4-a68e-10accc164104 Oct 3 00:02:10.822526 kubelet[1543]: E1003 00:02:10.822431 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:11.822801 kubelet[1543]: E1003 00:02:11.822693 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:12.665877 kubelet[1543]: E1003 00:02:12.665768 1543 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:12.812107 kubelet[1543]: E1003 00:02:12.812002 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:02:12.823313 kubelet[1543]: E1003 00:02:12.823218 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:13.823657 kubelet[1543]: E1003 00:02:13.823551 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:14.824023 kubelet[1543]: E1003 00:02:14.823910 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:15.824758 kubelet[1543]: E1003 00:02:15.824653 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:16.825211 kubelet[1543]: E1003 00:02:16.825106 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:17.813345 kubelet[1543]: E1003 00:02:17.813275 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:02:17.826153 kubelet[1543]: E1003 00:02:17.826050 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:18.826700 kubelet[1543]: E1003 00:02:18.826598 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:19.827783 kubelet[1543]: E1003 00:02:19.827674 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:20.828225 kubelet[1543]: E1003 00:02:20.828115 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:21.829427 kubelet[1543]: E1003 00:02:21.829322 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:22.807594 env[1152]: time="2023-10-03T00:02:22.807531523Z" level=info msg="CreateContainer within sandbox \"97dc686a2f3f9c375502f42f3b1eeaf7f1aa3ef94480ba30559c451e3c42dc7a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 3 00:02:22.811592 env[1152]: time="2023-10-03T00:02:22.811545294Z" level=info msg="CreateContainer within sandbox \"97dc686a2f3f9c375502f42f3b1eeaf7f1aa3ef94480ba30559c451e3c42dc7a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"5d810e0bec4105e0bb7a0e7b4f6eb8b2c4f5be5dbb0c5fb6f2e0b5d86c754ebd\"" Oct 3 00:02:22.811769 env[1152]: time="2023-10-03T00:02:22.811727623Z" level=info msg="StartContainer for \"5d810e0bec4105e0bb7a0e7b4f6eb8b2c4f5be5dbb0c5fb6f2e0b5d86c754ebd\"" Oct 3 00:02:22.814065 kubelet[1543]: E1003 00:02:22.814027 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:02:22.830257 kubelet[1543]: E1003 00:02:22.830211 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:22.833600 systemd[1]: Started cri-containerd-5d810e0bec4105e0bb7a0e7b4f6eb8b2c4f5be5dbb0c5fb6f2e0b5d86c754ebd.scope. Oct 3 00:02:22.839791 systemd[1]: cri-containerd-5d810e0bec4105e0bb7a0e7b4f6eb8b2c4f5be5dbb0c5fb6f2e0b5d86c754ebd.scope: Deactivated successfully. Oct 3 00:02:22.840001 systemd[1]: Stopped cri-containerd-5d810e0bec4105e0bb7a0e7b4f6eb8b2c4f5be5dbb0c5fb6f2e0b5d86c754ebd.scope. Oct 3 00:02:22.842761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d810e0bec4105e0bb7a0e7b4f6eb8b2c4f5be5dbb0c5fb6f2e0b5d86c754ebd-rootfs.mount: Deactivated successfully. 
Oct 3 00:02:22.861382 env[1152]: time="2023-10-03T00:02:22.861302672Z" level=info msg="shim disconnected" id=5d810e0bec4105e0bb7a0e7b4f6eb8b2c4f5be5dbb0c5fb6f2e0b5d86c754ebd Oct 3 00:02:22.861382 env[1152]: time="2023-10-03T00:02:22.861378923Z" level=warning msg="cleaning up after shim disconnected" id=5d810e0bec4105e0bb7a0e7b4f6eb8b2c4f5be5dbb0c5fb6f2e0b5d86c754ebd namespace=k8s.io Oct 3 00:02:22.861665 env[1152]: time="2023-10-03T00:02:22.861396567Z" level=info msg="cleaning up dead shim" Oct 3 00:02:22.883901 env[1152]: time="2023-10-03T00:02:22.883879597Z" level=warning msg="cleanup warnings time=\"2023-10-03T00:02:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2396 runtime=io.containerd.runc.v2\ntime=\"2023-10-03T00:02:22Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5d810e0bec4105e0bb7a0e7b4f6eb8b2c4f5be5dbb0c5fb6f2e0b5d86c754ebd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 3 00:02:22.884072 env[1152]: time="2023-10-03T00:02:22.884016608Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 3 00:02:22.884151 env[1152]: time="2023-10-03T00:02:22.884121437Z" level=error msg="Failed to pipe stdout of container \"5d810e0bec4105e0bb7a0e7b4f6eb8b2c4f5be5dbb0c5fb6f2e0b5d86c754ebd\"" error="reading from a closed fifo" Oct 3 00:02:22.884188 env[1152]: time="2023-10-03T00:02:22.884141372Z" level=error msg="Failed to pipe stderr of container \"5d810e0bec4105e0bb7a0e7b4f6eb8b2c4f5be5dbb0c5fb6f2e0b5d86c754ebd\"" error="reading from a closed fifo" Oct 3 00:02:22.884925 env[1152]: time="2023-10-03T00:02:22.884870159Z" level=error msg="StartContainer for \"5d810e0bec4105e0bb7a0e7b4f6eb8b2c4f5be5dbb0c5fb6f2e0b5d86c754ebd\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 3 00:02:22.885010 kubelet[1543]: E1003 00:02:22.884997 1543 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5d810e0bec4105e0bb7a0e7b4f6eb8b2c4f5be5dbb0c5fb6f2e0b5d86c754ebd" Oct 3 00:02:22.885071 kubelet[1543]: E1003 00:02:22.885065 1543 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 3 00:02:22.885071 kubelet[1543]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 3 00:02:22.885071 kubelet[1543]: rm /hostbin/cilium-mount Oct 3 00:02:22.885071 kubelet[1543]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-q7tcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-rskpt_kube-system(c90b25bb-7bc8-4cf4-a68e-10accc164104): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 3 00:02:22.885187 kubelet[1543]: E1003 00:02:22.885089 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rskpt" podUID=c90b25bb-7bc8-4cf4-a68e-10accc164104 Oct 3 00:02:23.415483 kubelet[1543]: I1003 00:02:23.415387 1543 scope.go:115] "RemoveContainer" containerID="ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032" Oct 3 00:02:23.416123 kubelet[1543]: I1003 00:02:23.416046 1543 scope.go:115] "RemoveContainer" containerID="ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032" Oct 3 00:02:23.418193 env[1152]: time="2023-10-03T00:02:23.418077487Z" level=info msg="RemoveContainer for \"ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032\"" Oct 3 00:02:23.419175 env[1152]: time="2023-10-03T00:02:23.419056410Z" level=info msg="RemoveContainer for \"ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032\"" Oct 3 00:02:23.419388 env[1152]: time="2023-10-03T00:02:23.419280970Z" level=error msg="RemoveContainer for \"ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032\" failed" error="failed to set removing state for container \"ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032\": container is already in removing state" Oct 3 00:02:23.419696 kubelet[1543]: E1003 00:02:23.419613 1543 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032\": container is already in 
removing state" containerID="ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032" Oct 3 00:02:23.419696 kubelet[1543]: E1003 00:02:23.419700 1543 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032": container is already in removing state; Skipping pod "cilium-rskpt_kube-system(c90b25bb-7bc8-4cf4-a68e-10accc164104)" Oct 3 00:02:23.420413 kubelet[1543]: E1003 00:02:23.420342 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-rskpt_kube-system(c90b25bb-7bc8-4cf4-a68e-10accc164104)\"" pod="kube-system/cilium-rskpt" podUID=c90b25bb-7bc8-4cf4-a68e-10accc164104 Oct 3 00:02:23.422768 env[1152]: time="2023-10-03T00:02:23.422670240Z" level=info msg="RemoveContainer for \"ae4acc251c4ef67fc6e83a7c30bfa59d424fd13f25580942ad069a35e149f032\" returns successfully" Oct 3 00:02:23.831017 kubelet[1543]: E1003 00:02:23.830916 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:24.831223 kubelet[1543]: E1003 00:02:24.831114 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:25.831786 kubelet[1543]: E1003 00:02:25.831678 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:25.968038 kubelet[1543]: W1003 00:02:25.967921 1543 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc90b25bb_7bc8_4cf4_a68e_10accc164104.slice/cri-containerd-5d810e0bec4105e0bb7a0e7b4f6eb8b2c4f5be5dbb0c5fb6f2e0b5d86c754ebd.scope WatchSource:0}: task 5d810e0bec4105e0bb7a0e7b4f6eb8b2c4f5be5dbb0c5fb6f2e0b5d86c754ebd not found: not found Oct 3 00:02:26.832182 kubelet[1543]: E1003 00:02:26.832118 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:27.815831 kubelet[1543]: E1003 00:02:27.815727 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:02:27.832465 kubelet[1543]: E1003 00:02:27.832365 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:28.833379 kubelet[1543]: E1003 00:02:28.833268 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:29.833838 kubelet[1543]: E1003 00:02:29.833737 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:30.834341 kubelet[1543]: E1003 00:02:30.834236 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:31.834727 kubelet[1543]: E1003 00:02:31.834627 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:32.665812 kubelet[1543]: E1003 00:02:32.665710 1543 file.go:104] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 3 00:02:32.698641 env[1152]: time="2023-10-03T00:02:32.698502894Z" level=info msg="StopPodSandbox for \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\"" Oct 3 00:02:32.699428 env[1152]: time="2023-10-03T00:02:32.698734888Z" level=info msg="TearDown network for sandbox \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\" successfully" Oct 3 00:02:32.699428 env[1152]: time="2023-10-03T00:02:32.698837990Z" level=info msg="StopPodSandbox for \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\" returns successfully" Oct 3 00:02:32.699861 env[1152]: time="2023-10-03T00:02:32.699753013Z" level=info msg="RemovePodSandbox for \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\"" Oct 3 00:02:32.700078 env[1152]: time="2023-10-03T00:02:32.699826146Z" level=info msg="Forcibly stopping sandbox \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\"" Oct 3 00:02:32.700078 env[1152]: time="2023-10-03T00:02:32.700020581Z" level=info msg="TearDown network for sandbox \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\" successfully" Oct 3 00:02:32.703528 env[1152]: time="2023-10-03T00:02:32.703457808Z" level=info msg="RemovePodSandbox \"9c0f12506cfb2de8d040c6772c38bf473f44f99a6083ba2d050c3650e0e28003\" returns successfully" Oct 3 00:02:32.817081 kubelet[1543]: E1003 00:02:32.817019 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:02:32.835342 kubelet[1543]: E1003 00:02:32.835270 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:33.836152 kubelet[1543]: E1003 00:02:33.836081 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:34.837108 kubelet[1543]: E1003 00:02:34.837034 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:35.837276 kubelet[1543]: E1003 00:02:35.837196 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:36.837561 kubelet[1543]: E1003 00:02:36.837480 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:37.818789 kubelet[1543]: E1003 00:02:37.818724 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:02:37.838135 kubelet[1543]: E1003 00:02:37.838025 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:38.803313 kubelet[1543]: E1003 00:02:38.803210 1543 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-rskpt_kube-system(c90b25bb-7bc8-4cf4-a68e-10accc164104)\"" pod="kube-system/cilium-rskpt" podUID=c90b25bb-7bc8-4cf4-a68e-10accc164104 Oct 3 00:02:38.839232 kubelet[1543]: E1003 00:02:38.839119 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:39.839667 kubelet[1543]: E1003 
00:02:39.839558 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:40.840735 kubelet[1543]: E1003 00:02:40.840634 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:41.841195 kubelet[1543]: E1003 00:02:41.841093 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:42.820307 kubelet[1543]: E1003 00:02:42.820253 1543 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 3 00:02:42.841418 kubelet[1543]: E1003 00:02:42.841310 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:43.842378 kubelet[1543]: E1003 00:02:43.842265 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:44.567454 env[1152]: time="2023-10-03T00:02:44.567321845Z" level=info msg="StopPodSandbox for \"97dc686a2f3f9c375502f42f3b1eeaf7f1aa3ef94480ba30559c451e3c42dc7a\"" Oct 3 00:02:44.568624 env[1152]: time="2023-10-03T00:02:44.567475383Z" level=info msg="Container to stop \"5d810e0bec4105e0bb7a0e7b4f6eb8b2c4f5be5dbb0c5fb6f2e0b5d86c754ebd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 3 00:02:44.570173 env[1152]: time="2023-10-03T00:02:44.570089011Z" level=info msg="StopContainer for \"0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82\" with timeout 30 (s)" Oct 3 00:02:44.570961 env[1152]: time="2023-10-03T00:02:44.570866442Z" level=info msg="Stop container \"0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82\" with signal terminated" Oct 3 00:02:44.572143 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97dc686a2f3f9c375502f42f3b1eeaf7f1aa3ef94480ba30559c451e3c42dc7a-shm.mount: Deactivated successfully. Oct 3 00:02:44.577561 systemd[1]: cri-containerd-0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82.scope: Deactivated successfully. Oct 3 00:02:44.576000 audit: BPF prog-id=81 op=UNLOAD Oct 3 00:02:44.604757 kernel: kauditd_printk_skb: 50 callbacks suppressed Oct 3 00:02:44.604810 kernel: audit: type=1334 audit(1696291364.576:672): prog-id=81 op=UNLOAD Oct 3 00:02:44.611174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82-rootfs.mount: Deactivated successfully. Oct 3 00:02:44.633650 systemd[1]: cri-containerd-97dc686a2f3f9c375502f42f3b1eeaf7f1aa3ef94480ba30559c451e3c42dc7a.scope: Deactivated successfully. 
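The StopContainer/StopPodSandbox records at 00:02:44 above mark the point where the failing cilium-rskpt pod starts being torn down rather than restarted; until then the kubelet had been doubling the CrashLoopBackOff delay seen earlier in the log ("back-off 20s" at 00:01:57, "back-off 40s" at 00:02:23). A small sketch of that progression, assuming the commonly documented kubelet defaults (10s initial delay, doubling per failed restart, capped at 5 minutes); the actual parameters on this node are not shown in the log:

# Hypothetical illustration of the CrashLoopBackOff delay doubling
# reflected in the "back-off 20s" and "back-off 40s" messages above.
def crashloop_delays(initial_s=10, cap_s=300, restarts=6):
    delays, d = [], initial_s
    for _ in range(restarts):
        delays.append(d)
        d = min(d * 2, cap_s)
    return delays

print(crashloop_delays())  # [10, 20, 40, 80, 160, 300]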
Oct 3 00:02:44.632000 audit: BPF prog-id=75 op=UNLOAD Oct 3 00:02:44.661888 kernel: audit: type=1334 audit(1696291364.632:673): prog-id=75 op=UNLOAD Oct 3 00:02:44.662061 env[1152]: time="2023-10-03T00:02:44.662010832Z" level=info msg="shim disconnected" id=0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82 Oct 3 00:02:44.662061 env[1152]: time="2023-10-03T00:02:44.662037545Z" level=warning msg="cleaning up after shim disconnected" id=0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82 namespace=k8s.io Oct 3 00:02:44.662061 env[1152]: time="2023-10-03T00:02:44.662043428Z" level=info msg="cleaning up dead shim" Oct 3 00:02:44.674000 audit: BPF prog-id=84 op=UNLOAD Oct 3 00:02:44.676758 env[1152]: time="2023-10-03T00:02:44.676694178Z" level=warning msg="cleanup warnings time=\"2023-10-03T00:02:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2443 runtime=io.containerd.runc.v2\n" Oct 3 00:02:44.677728 env[1152]: time="2023-10-03T00:02:44.677679561Z" level=info msg="StopContainer for \"0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82\" returns successfully" Oct 3 00:02:44.678060 env[1152]: time="2023-10-03T00:02:44.678042123Z" level=info msg="StopPodSandbox for \"38673c5672bd9ef1052515465f726e379cbeb158feda30ca7e72301ef2dc03e5\"" Oct 3 00:02:44.678096 env[1152]: time="2023-10-03T00:02:44.678072272Z" level=info msg="Container to stop \"0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 3 00:02:44.674000 audit: BPF prog-id=80 op=UNLOAD Oct 3 00:02:44.703016 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38673c5672bd9ef1052515465f726e379cbeb158feda30ca7e72301ef2dc03e5-shm.mount: Deactivated successfully. Oct 3 00:02:44.717015 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97dc686a2f3f9c375502f42f3b1eeaf7f1aa3ef94480ba30559c451e3c42dc7a-rootfs.mount: Deactivated successfully. Oct 3 00:02:44.728893 kernel: audit: type=1334 audit(1696291364.674:674): prog-id=84 op=UNLOAD Oct 3 00:02:44.728921 kernel: audit: type=1334 audit(1696291364.674:675): prog-id=80 op=UNLOAD Oct 3 00:02:44.746302 systemd[1]: cri-containerd-38673c5672bd9ef1052515465f726e379cbeb158feda30ca7e72301ef2dc03e5.scope: Deactivated successfully. Oct 3 00:02:44.745000 audit: BPF prog-id=73 op=UNLOAD Oct 3 00:02:44.773053 kernel: audit: type=1334 audit(1696291364.745:676): prog-id=73 op=UNLOAD Oct 3 00:02:44.778900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38673c5672bd9ef1052515465f726e379cbeb158feda30ca7e72301ef2dc03e5-rootfs.mount: Deactivated successfully. 
Oct 3 00:02:44.779309 env[1152]: time="2023-10-03T00:02:44.779250539Z" level=info msg="shim disconnected" id=38673c5672bd9ef1052515465f726e379cbeb158feda30ca7e72301ef2dc03e5 Oct 3 00:02:44.779309 env[1152]: time="2023-10-03T00:02:44.779285281Z" level=warning msg="cleaning up after shim disconnected" id=38673c5672bd9ef1052515465f726e379cbeb158feda30ca7e72301ef2dc03e5 namespace=k8s.io Oct 3 00:02:44.779309 env[1152]: time="2023-10-03T00:02:44.779293539Z" level=info msg="cleaning up dead shim" Oct 3 00:02:44.782711 env[1152]: time="2023-10-03T00:02:44.782662217Z" level=warning msg="cleanup warnings time=\"2023-10-03T00:02:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2478 runtime=io.containerd.runc.v2\n" Oct 3 00:02:44.782842 env[1152]: time="2023-10-03T00:02:44.782806242Z" level=info msg="TearDown network for sandbox \"38673c5672bd9ef1052515465f726e379cbeb158feda30ca7e72301ef2dc03e5\" successfully" Oct 3 00:02:44.782842 env[1152]: time="2023-10-03T00:02:44.782818023Z" level=info msg="StopPodSandbox for \"38673c5672bd9ef1052515465f726e379cbeb158feda30ca7e72301ef2dc03e5\" returns successfully" Oct 3 00:02:44.782000 audit: BPF prog-id=79 op=UNLOAD Oct 3 00:02:44.808984 kernel: audit: type=1334 audit(1696291364.782:677): prog-id=79 op=UNLOAD Oct 3 00:02:44.810349 env[1152]: time="2023-10-03T00:02:44.810300092Z" level=info msg="shim disconnected" id=97dc686a2f3f9c375502f42f3b1eeaf7f1aa3ef94480ba30559c451e3c42dc7a Oct 3 00:02:44.810349 env[1152]: time="2023-10-03T00:02:44.810320087Z" level=warning msg="cleaning up after shim disconnected" id=97dc686a2f3f9c375502f42f3b1eeaf7f1aa3ef94480ba30559c451e3c42dc7a namespace=k8s.io Oct 3 00:02:44.810349 env[1152]: time="2023-10-03T00:02:44.810325754Z" level=info msg="cleaning up dead shim" Oct 3 00:02:44.814216 env[1152]: time="2023-10-03T00:02:44.814162451Z" level=warning msg="cleanup warnings time=\"2023-10-03T00:02:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2490 runtime=io.containerd.runc.v2\n" Oct 3 00:02:44.814366 env[1152]: time="2023-10-03T00:02:44.814325050Z" level=info msg="TearDown network for sandbox \"97dc686a2f3f9c375502f42f3b1eeaf7f1aa3ef94480ba30559c451e3c42dc7a\" successfully" Oct 3 00:02:44.814366 env[1152]: time="2023-10-03T00:02:44.814337856Z" level=info msg="StopPodSandbox for \"97dc686a2f3f9c375502f42f3b1eeaf7f1aa3ef94480ba30559c451e3c42dc7a\" returns successfully" Oct 3 00:02:44.842683 kubelet[1543]: E1003 00:02:44.842623 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 3 00:02:44.863149 kubelet[1543]: I1003 00:02:44.863097 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k82jt\" (UniqueName: \"kubernetes.io/projected/d3ec8d45-fb7b-4832-bdc5-1db86de5f255-kube-api-access-k82jt\") pod \"d3ec8d45-fb7b-4832-bdc5-1db86de5f255\" (UID: \"d3ec8d45-fb7b-4832-bdc5-1db86de5f255\") " Oct 3 00:02:44.863149 kubelet[1543]: I1003 00:02:44.863142 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3ec8d45-fb7b-4832-bdc5-1db86de5f255-cilium-config-path\") pod \"d3ec8d45-fb7b-4832-bdc5-1db86de5f255\" (UID: \"d3ec8d45-fb7b-4832-bdc5-1db86de5f255\") " Oct 3 00:02:44.863293 kubelet[1543]: W1003 00:02:44.863268 1543 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/d3ec8d45-fb7b-4832-bdc5-1db86de5f255/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, 
but quotas disabled Oct 3 00:02:44.864639 kubelet[1543]: I1003 00:02:44.864598 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3ec8d45-fb7b-4832-bdc5-1db86de5f255-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d3ec8d45-fb7b-4832-bdc5-1db86de5f255" (UID: "d3ec8d45-fb7b-4832-bdc5-1db86de5f255"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 3 00:02:44.865077 kubelet[1543]: I1003 00:02:44.865030 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ec8d45-fb7b-4832-bdc5-1db86de5f255-kube-api-access-k82jt" (OuterVolumeSpecName: "kube-api-access-k82jt") pod "d3ec8d45-fb7b-4832-bdc5-1db86de5f255" (UID: "d3ec8d45-fb7b-4832-bdc5-1db86de5f255"). InnerVolumeSpecName "kube-api-access-k82jt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 3 00:02:44.963912 kubelet[1543]: I1003 00:02:44.963804 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-etc-cni-netd\") pod \"c90b25bb-7bc8-4cf4-a68e-10accc164104\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " Oct 3 00:02:44.963912 kubelet[1543]: I1003 00:02:44.963905 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-xtables-lock\") pod \"c90b25bb-7bc8-4cf4-a68e-10accc164104\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " Oct 3 00:02:44.964355 kubelet[1543]: I1003 00:02:44.964007 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c90b25bb-7bc8-4cf4-a68e-10accc164104-cilium-ipsec-secrets\") pod \"c90b25bb-7bc8-4cf4-a68e-10accc164104\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " Oct 3 00:02:44.964355 kubelet[1543]: I1003 00:02:44.964004 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c90b25bb-7bc8-4cf4-a68e-10accc164104" (UID: "c90b25bb-7bc8-4cf4-a68e-10accc164104"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 3 00:02:44.964355 kubelet[1543]: I1003 00:02:44.964076 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c90b25bb-7bc8-4cf4-a68e-10accc164104-clustermesh-secrets\") pod \"c90b25bb-7bc8-4cf4-a68e-10accc164104\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " Oct 3 00:02:44.964355 kubelet[1543]: I1003 00:02:44.964061 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c90b25bb-7bc8-4cf4-a68e-10accc164104" (UID: "c90b25bb-7bc8-4cf4-a68e-10accc164104"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 3 00:02:44.964355 kubelet[1543]: I1003 00:02:44.964140 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c90b25bb-7bc8-4cf4-a68e-10accc164104-hubble-tls\") pod \"c90b25bb-7bc8-4cf4-a68e-10accc164104\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " Oct 3 00:02:44.964355 kubelet[1543]: I1003 00:02:44.964204 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7tcr\" (UniqueName: \"kubernetes.io/projected/c90b25bb-7bc8-4cf4-a68e-10accc164104-kube-api-access-q7tcr\") pod \"c90b25bb-7bc8-4cf4-a68e-10accc164104\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " Oct 3 00:02:44.964355 kubelet[1543]: I1003 00:02:44.964269 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-lib-modules\") pod \"c90b25bb-7bc8-4cf4-a68e-10accc164104\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " Oct 3 00:02:44.964355 kubelet[1543]: I1003 00:02:44.964326 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-host-proc-sys-net\") pod \"c90b25bb-7bc8-4cf4-a68e-10accc164104\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " Oct 3 00:02:44.965242 kubelet[1543]: I1003 00:02:44.964381 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-cni-path\") pod \"c90b25bb-7bc8-4cf4-a68e-10accc164104\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " Oct 3 00:02:44.965242 kubelet[1543]: I1003 00:02:44.964435 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-cilium-run\") pod \"c90b25bb-7bc8-4cf4-a68e-10accc164104\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " Oct 3 00:02:44.965242 kubelet[1543]: I1003 00:02:44.964438 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c90b25bb-7bc8-4cf4-a68e-10accc164104" (UID: "c90b25bb-7bc8-4cf4-a68e-10accc164104"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 3 00:02:44.965242 kubelet[1543]: I1003 00:02:44.964494 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-cilium-cgroup\") pod \"c90b25bb-7bc8-4cf4-a68e-10accc164104\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " Oct 3 00:02:44.965242 kubelet[1543]: I1003 00:02:44.964512 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c90b25bb-7bc8-4cf4-a68e-10accc164104" (UID: "c90b25bb-7bc8-4cf4-a68e-10accc164104"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 3 00:02:44.965242 kubelet[1543]: I1003 00:02:44.964540 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c90b25bb-7bc8-4cf4-a68e-10accc164104" (UID: "c90b25bb-7bc8-4cf4-a68e-10accc164104"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 3 00:02:44.965242 kubelet[1543]: I1003 00:02:44.964561 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c90b25bb-7bc8-4cf4-a68e-10accc164104-cilium-config-path\") pod \"c90b25bb-7bc8-4cf4-a68e-10accc164104\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " Oct 3 00:02:44.965242 kubelet[1543]: I1003 00:02:44.964564 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-cni-path" (OuterVolumeSpecName: "cni-path") pod "c90b25bb-7bc8-4cf4-a68e-10accc164104" (UID: "c90b25bb-7bc8-4cf4-a68e-10accc164104"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 3 00:02:44.965242 kubelet[1543]: I1003 00:02:44.964644 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c90b25bb-7bc8-4cf4-a68e-10accc164104" (UID: "c90b25bb-7bc8-4cf4-a68e-10accc164104"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 3 00:02:44.965242 kubelet[1543]: I1003 00:02:44.964692 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-host-proc-sys-kernel\") pod \"c90b25bb-7bc8-4cf4-a68e-10accc164104\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " Oct 3 00:02:44.965242 kubelet[1543]: I1003 00:02:44.964764 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-hostproc\") pod \"c90b25bb-7bc8-4cf4-a68e-10accc164104\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " Oct 3 00:02:44.965242 kubelet[1543]: I1003 00:02:44.964743 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c90b25bb-7bc8-4cf4-a68e-10accc164104" (UID: "c90b25bb-7bc8-4cf4-a68e-10accc164104"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 3 00:02:44.965242 kubelet[1543]: I1003 00:02:44.964819 1543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-bpf-maps\") pod \"c90b25bb-7bc8-4cf4-a68e-10accc164104\" (UID: \"c90b25bb-7bc8-4cf4-a68e-10accc164104\") " Oct 3 00:02:44.965242 kubelet[1543]: I1003 00:02:44.964835 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-hostproc" (OuterVolumeSpecName: "hostproc") pod "c90b25bb-7bc8-4cf4-a68e-10accc164104" (UID: "c90b25bb-7bc8-4cf4-a68e-10accc164104"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 3 00:02:44.965242 kubelet[1543]: I1003 00:02:44.964893 1543 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-k82jt\" (UniqueName: \"kubernetes.io/projected/d3ec8d45-fb7b-4832-bdc5-1db86de5f255-kube-api-access-k82jt\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:02:44.966707 kubelet[1543]: W1003 00:02:44.964862 1543 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/c90b25bb-7bc8-4cf4-a68e-10accc164104/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 3 00:02:44.966707 kubelet[1543]: I1003 00:02:44.964931 1543 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3ec8d45-fb7b-4832-bdc5-1db86de5f255-cilium-config-path\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:02:44.966707 kubelet[1543]: I1003 00:02:44.964964 1543 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-etc-cni-netd\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:02:44.966707 kubelet[1543]: I1003 00:02:44.965021 1543 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-xtables-lock\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:02:44.966707 kubelet[1543]: I1003 00:02:44.964951 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c90b25bb-7bc8-4cf4-a68e-10accc164104" (UID: "c90b25bb-7bc8-4cf4-a68e-10accc164104"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 3 00:02:44.966707 kubelet[1543]: I1003 00:02:44.965055 1543 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-lib-modules\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:02:44.966707 kubelet[1543]: I1003 00:02:44.965086 1543 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-host-proc-sys-net\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:02:44.966707 kubelet[1543]: I1003 00:02:44.965115 1543 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-cni-path\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:02:44.966707 kubelet[1543]: I1003 00:02:44.965144 1543 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-cilium-run\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:02:44.966707 kubelet[1543]: I1003 00:02:44.965172 1543 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-cilium-cgroup\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:02:44.966707 kubelet[1543]: I1003 00:02:44.965201 1543 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-host-proc-sys-kernel\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:02:44.969865 kubelet[1543]: I1003 00:02:44.969763 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c90b25bb-7bc8-4cf4-a68e-10accc164104-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c90b25bb-7bc8-4cf4-a68e-10accc164104" (UID: "c90b25bb-7bc8-4cf4-a68e-10accc164104"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 3 00:02:44.970242 kubelet[1543]: I1003 00:02:44.970189 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c90b25bb-7bc8-4cf4-a68e-10accc164104-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c90b25bb-7bc8-4cf4-a68e-10accc164104" (UID: "c90b25bb-7bc8-4cf4-a68e-10accc164104"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 3 00:02:44.970242 kubelet[1543]: I1003 00:02:44.970208 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c90b25bb-7bc8-4cf4-a68e-10accc164104-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c90b25bb-7bc8-4cf4-a68e-10accc164104" (UID: "c90b25bb-7bc8-4cf4-a68e-10accc164104"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 3 00:02:44.970331 kubelet[1543]: I1003 00:02:44.970275 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c90b25bb-7bc8-4cf4-a68e-10accc164104-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c90b25bb-7bc8-4cf4-a68e-10accc164104" (UID: "c90b25bb-7bc8-4cf4-a68e-10accc164104"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 3 00:02:44.970331 kubelet[1543]: I1003 00:02:44.970291 1543 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c90b25bb-7bc8-4cf4-a68e-10accc164104-kube-api-access-q7tcr" (OuterVolumeSpecName: "kube-api-access-q7tcr") pod "c90b25bb-7bc8-4cf4-a68e-10accc164104" (UID: "c90b25bb-7bc8-4cf4-a68e-10accc164104"). InnerVolumeSpecName "kube-api-access-q7tcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 3 00:02:45.066392 kubelet[1543]: I1003 00:02:45.066302 1543 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-bpf-maps\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:02:45.066392 kubelet[1543]: I1003 00:02:45.066346 1543 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c90b25bb-7bc8-4cf4-a68e-10accc164104-cilium-config-path\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:02:45.066392 kubelet[1543]: I1003 00:02:45.066363 1543 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c90b25bb-7bc8-4cf4-a68e-10accc164104-hostproc\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:02:45.066392 kubelet[1543]: I1003 00:02:45.066378 1543 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c90b25bb-7bc8-4cf4-a68e-10accc164104-cilium-ipsec-secrets\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:02:45.066392 kubelet[1543]: I1003 00:02:45.066396 1543 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c90b25bb-7bc8-4cf4-a68e-10accc164104-hubble-tls\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:02:45.066392 kubelet[1543]: I1003 00:02:45.066412 1543 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-q7tcr\" (UniqueName: \"kubernetes.io/projected/c90b25bb-7bc8-4cf4-a68e-10accc164104-kube-api-access-q7tcr\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:02:45.066833 kubelet[1543]: I1003 00:02:45.066432 1543 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c90b25bb-7bc8-4cf4-a68e-10accc164104-clustermesh-secrets\") on node \"10.67.124.213\" DevicePath \"\"" Oct 3 00:02:45.478692 kubelet[1543]: I1003 00:02:45.478584 1543 scope.go:115] "RemoveContainer" containerID="5d810e0bec4105e0bb7a0e7b4f6eb8b2c4f5be5dbb0c5fb6f2e0b5d86c754ebd" Oct 3 00:02:45.483246 env[1152]: time="2023-10-03T00:02:45.482287827Z" level=info msg="RemoveContainer for \"5d810e0bec4105e0bb7a0e7b4f6eb8b2c4f5be5dbb0c5fb6f2e0b5d86c754ebd\"" Oct 3 00:02:45.485923 env[1152]: time="2023-10-03T00:02:45.485831666Z" level=info msg="RemoveContainer for \"5d810e0bec4105e0bb7a0e7b4f6eb8b2c4f5be5dbb0c5fb6f2e0b5d86c754ebd\" returns successfully" Oct 3 00:02:45.486285 kubelet[1543]: I1003 00:02:45.486238 1543 scope.go:115] "RemoveContainer" containerID="0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82" Oct 3 00:02:45.486960 env[1152]: time="2023-10-03T00:02:45.486946180Z" level=info msg="RemoveContainer for \"0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82\"" Oct 3 00:02:45.487790 systemd[1]: Removed slice kubepods-burstable-podc90b25bb_7bc8_4cf4_a68e_10accc164104.slice. 
Oct 3 00:02:45.488140 env[1152]: time="2023-10-03T00:02:45.488125954Z" level=info msg="RemoveContainer for \"0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82\" returns successfully" Oct 3 00:02:45.488212 kubelet[1543]: I1003 00:02:45.488204 1543 scope.go:115] "RemoveContainer" containerID="0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82" Oct 3 00:02:45.488342 env[1152]: time="2023-10-03T00:02:45.488304718Z" level=error msg="ContainerStatus for \"0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82\": not found" Oct 3 00:02:45.488432 kubelet[1543]: E1003 00:02:45.488427 1543 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82\": not found" containerID="0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82" Oct 3 00:02:45.488471 kubelet[1543]: I1003 00:02:45.488444 1543 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82} err="failed to get container status \"0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82\": rpc error: code = NotFound desc = an error occurred when try to find container \"0134a5f564620d3ec61d8a99c5c6cc33f35dd314ae5c42bfbe27fac15c340c82\": not found" Oct 3 00:02:45.488549 systemd[1]: Removed slice kubepods-besteffort-podd3ec8d45_fb7b_4832_bdc5_1db86de5f255.slice. Oct 3 00:02:45.571332 systemd[1]: var-lib-kubelet-pods-d3ec8d45\x2dfb7b\x2d4832\x2dbdc5\x2d1db86de5f255-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk82jt.mount: Deactivated successfully. Oct 3 00:02:45.571602 systemd[1]: var-lib-kubelet-pods-c90b25bb\x2d7bc8\x2d4cf4\x2da68e\x2d10accc164104-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq7tcr.mount: Deactivated successfully. Oct 3 00:02:45.571784 systemd[1]: var-lib-kubelet-pods-c90b25bb\x2d7bc8\x2d4cf4\x2da68e\x2d10accc164104-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 3 00:02:45.571954 systemd[1]: var-lib-kubelet-pods-c90b25bb\x2d7bc8\x2d4cf4\x2da68e\x2d10accc164104-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 3 00:02:45.572136 systemd[1]: var-lib-kubelet-pods-c90b25bb\x2d7bc8\x2d4cf4\x2da68e\x2d10accc164104-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 3 00:02:45.842897 kubelet[1543]: E1003 00:02:45.842823 1543 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
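The var-lib-kubelet-pods-....mount unit names in the records above are systemd-escaped versions of the kubelet volume paths: path separators become '-', and characters such as '-' and '~' are hex-escaped (\x2d, \x7e). A rough sketch of that escaping, shown here only to make the unit names readable; the authoritative tool is `systemd-escape --path --suffix=mount <path>`, and edge cases of the real algorithm are not reproduced:

def systemd_escape_path(path: str) -> str:
    # Approximation of systemd's path escaping: keep ASCII alphanumerics,
    # ':' and '_' (and '.' except at the start of a component here),
    # hex-escape everything else, and join path components with '-'.
    def esc(part: str) -> str:
        out = []
        for i, ch in enumerate(part):
            if ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
                out.append(ch)
            else:
                out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
        return "".join(out)
    return "-".join(esc(p) for p in path.strip("/").split("/") if p)

print(systemd_escape_path(
    "/var/lib/kubelet/pods/c90b25bb-7bc8-4cf4-a68e-10accc164104/"
    "volumes/kubernetes.io~projected/hubble-tls") + ".mount")
# var-lib-kubelet-pods-c90b25bb\x2d7bc8\x2d4cf4\x2da68e\x2d10accc164104-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount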