Oct 2 19:13:14.645567 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 19:13:14.645581 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:13:14.645587 kernel: Disabled fast string operations Oct 2 19:13:14.645591 kernel: BIOS-provided physical RAM map: Oct 2 19:13:14.645595 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Oct 2 19:13:14.645599 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Oct 2 19:13:14.645605 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Oct 2 19:13:14.645609 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Oct 2 19:13:14.645613 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Oct 2 19:13:14.645617 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Oct 2 19:13:14.645621 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Oct 2 19:13:14.645625 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Oct 2 19:13:14.645629 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Oct 2 19:13:14.645633 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Oct 2 19:13:14.645639 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Oct 2 19:13:14.645644 kernel: NX (Execute Disable) protection: active Oct 2 19:13:14.645648 kernel: SMBIOS 2.7 present. Oct 2 19:13:14.645653 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Oct 2 19:13:14.645657 kernel: vmware: hypercall mode: 0x00 Oct 2 19:13:14.645661 kernel: Hypervisor detected: VMware Oct 2 19:13:14.645667 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Oct 2 19:13:14.645671 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Oct 2 19:13:14.645675 kernel: vmware: using clock offset of 3531902150 ns Oct 2 19:13:14.645680 kernel: tsc: Detected 3408.000 MHz processor Oct 2 19:13:14.645685 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 19:13:14.645690 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 19:13:14.645694 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Oct 2 19:13:14.645699 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 19:13:14.645704 kernel: total RAM covered: 3072M Oct 2 19:13:14.645709 kernel: Found optimal setting for mtrr clean up Oct 2 19:13:14.645714 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Oct 2 19:13:14.645719 kernel: Using GB pages for direct mapping Oct 2 19:13:14.645723 kernel: ACPI: Early table checksum verification disabled Oct 2 19:13:14.645728 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Oct 2 19:13:14.645732 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Oct 2 19:13:14.645737 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Oct 2 19:13:14.645741 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Oct 2 19:13:14.645746 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Oct 2 19:13:14.645750 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Oct 2 19:13:14.645755 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Oct 2 19:13:14.645762 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Oct 2 19:13:14.645767 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Oct 2 19:13:14.645772 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Oct 2 19:13:14.645777 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Oct 2 19:13:14.645782 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Oct 2 19:13:14.645787 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Oct 2 19:13:14.645792 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Oct 2 19:13:14.645797 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Oct 2 19:13:14.645802 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Oct 2 19:13:14.645807 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Oct 2 19:13:14.645812 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Oct 2 19:13:14.645817 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Oct 2 19:13:14.645821 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Oct 2 19:13:14.645827 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Oct 2 19:13:14.645832 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Oct 2 19:13:14.645837 kernel: system APIC only can use physical flat Oct 2 19:13:14.645842 kernel: Setting APIC routing to physical flat. 
Oct 2 19:13:14.645846 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Oct 2 19:13:14.645851 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Oct 2 19:13:14.645856 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Oct 2 19:13:14.645861 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Oct 2 19:13:14.645865 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Oct 2 19:13:14.645871 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Oct 2 19:13:14.645876 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Oct 2 19:13:14.645881 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Oct 2 19:13:14.645885 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Oct 2 19:13:14.645890 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Oct 2 19:13:14.645895 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Oct 2 19:13:14.645900 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Oct 2 19:13:14.645904 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Oct 2 19:13:14.645909 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Oct 2 19:13:14.645914 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Oct 2 19:13:14.645919 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Oct 2 19:13:14.645924 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Oct 2 19:13:14.645929 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Oct 2 19:13:14.645934 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Oct 2 19:13:14.645938 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Oct 2 19:13:14.645943 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Oct 2 19:13:14.645948 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Oct 2 19:13:14.645953 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Oct 2 19:13:14.645957 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Oct 2 19:13:14.645962 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Oct 2 19:13:14.645968 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Oct 2 19:13:14.645973 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Oct 2 19:13:14.645978 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Oct 2 19:13:14.645983 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Oct 2 19:13:14.645987 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Oct 2 19:13:14.645992 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Oct 2 19:13:14.645997 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Oct 2 19:13:14.646001 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Oct 2 19:13:14.646006 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Oct 2 19:13:14.646012 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Oct 2 19:13:14.646017 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Oct 2 19:13:14.646021 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Oct 2 19:13:14.646026 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Oct 2 19:13:14.646031 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Oct 2 19:13:14.646036 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Oct 2 19:13:14.646040 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Oct 2 19:13:14.646045 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Oct 2 19:13:14.646050 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Oct 2 19:13:14.646055 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Oct 2 19:13:14.646060 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Oct 2 19:13:14.646065 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Oct 2 19:13:14.646070 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Oct 2 19:13:14.646081 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Oct 2 19:13:14.650116 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Oct 2 19:13:14.650123 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Oct 2 19:13:14.650128 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Oct 2 19:13:14.650133 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Oct 2 19:13:14.650138 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Oct 2 19:13:14.650143 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Oct 2 19:13:14.650150 kernel: SRAT: PXM 0 -> 
APIC 0x6c -> Node 0 Oct 2 19:13:14.650159 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Oct 2 19:13:14.650164 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Oct 2 19:13:14.650169 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Oct 2 19:13:14.650174 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Oct 2 19:13:14.650179 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Oct 2 19:13:14.650184 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Oct 2 19:13:14.650194 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Oct 2 19:13:14.650199 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Oct 2 19:13:14.650205 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Oct 2 19:13:14.650210 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Oct 2 19:13:14.650215 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Oct 2 19:13:14.650221 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Oct 2 19:13:14.650226 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Oct 2 19:13:14.650231 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Oct 2 19:13:14.650236 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Oct 2 19:13:14.650241 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Oct 2 19:13:14.650247 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Oct 2 19:13:14.650253 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Oct 2 19:13:14.650258 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Oct 2 19:13:14.650263 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Oct 2 19:13:14.650268 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Oct 2 19:13:14.650273 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Oct 2 19:13:14.650278 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Oct 2 19:13:14.650283 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Oct 2 19:13:14.650289 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Oct 2 19:13:14.650294 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Oct 2 19:13:14.650300 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Oct 2 19:13:14.650305 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Oct 2 19:13:14.650310 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Oct 2 19:13:14.650316 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Oct 2 19:13:14.650321 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Oct 2 19:13:14.650326 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Oct 2 19:13:14.650331 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Oct 2 19:13:14.650336 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Oct 2 19:13:14.650341 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Oct 2 19:13:14.650346 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Oct 2 19:13:14.650352 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Oct 2 19:13:14.650357 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Oct 2 19:13:14.650363 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Oct 2 19:13:14.650368 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Oct 2 19:13:14.650373 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Oct 2 19:13:14.650378 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Oct 2 19:13:14.650383 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Oct 2 19:13:14.650388 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Oct 2 19:13:14.650394 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Oct 2 19:13:14.650400 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Oct 2 19:13:14.650405 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Oct 2 19:13:14.650410 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Oct 2 19:13:14.650415 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Oct 2 19:13:14.650420 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Oct 2 19:13:14.650426 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Oct 2 19:13:14.650431 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Oct 2 19:13:14.650436 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Oct 2 19:13:14.650441 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Oct 2 19:13:14.650446 
kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Oct 2 19:13:14.650452 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Oct 2 19:13:14.650457 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Oct 2 19:13:14.650463 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Oct 2 19:13:14.650468 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Oct 2 19:13:14.650473 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Oct 2 19:13:14.650478 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Oct 2 19:13:14.650483 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Oct 2 19:13:14.650488 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Oct 2 19:13:14.650493 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Oct 2 19:13:14.650500 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Oct 2 19:13:14.650505 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Oct 2 19:13:14.650510 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Oct 2 19:13:14.650515 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Oct 2 19:13:14.650520 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Oct 2 19:13:14.650525 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Oct 2 19:13:14.650530 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Oct 2 19:13:14.650536 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Oct 2 19:13:14.650541 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Oct 2 19:13:14.650546 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Oct 2 19:13:14.650552 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Oct 2 19:13:14.650558 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Oct 2 19:13:14.650563 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Oct 2 19:13:14.650569 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Oct 2 19:13:14.650574 kernel: Zone ranges: Oct 2 19:13:14.650580 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 19:13:14.650585 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Oct 2 19:13:14.650591 kernel: Normal empty Oct 2 19:13:14.650596 kernel: Movable zone start for each node Oct 2 19:13:14.650602 kernel: Early memory node ranges Oct 2 19:13:14.650608 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Oct 2 19:13:14.650613 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Oct 2 19:13:14.650618 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Oct 2 19:13:14.650623 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Oct 2 19:13:14.650628 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 19:13:14.650634 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Oct 2 19:13:14.650639 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Oct 2 19:13:14.650644 kernel: ACPI: PM-Timer IO Port: 0x1008 Oct 2 19:13:14.650650 kernel: system APIC only can use physical flat Oct 2 19:13:14.650656 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Oct 2 19:13:14.650661 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Oct 2 19:13:14.650666 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Oct 2 19:13:14.650671 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Oct 2 19:13:14.650676 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Oct 2 19:13:14.650682 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Oct 2 19:13:14.650687 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Oct 2 19:13:14.650692 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Oct 2 19:13:14.650697 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Oct 2 19:13:14.650703 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x09] high edge lint[0x1]) Oct 2 19:13:14.650708 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Oct 2 19:13:14.650714 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Oct 2 19:13:14.650719 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Oct 2 19:13:14.650724 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Oct 2 19:13:14.650729 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Oct 2 19:13:14.650734 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Oct 2 19:13:14.650739 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Oct 2 19:13:14.650744 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Oct 2 19:13:14.650750 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Oct 2 19:13:14.650755 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Oct 2 19:13:14.650761 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Oct 2 19:13:14.650766 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Oct 2 19:13:14.650771 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Oct 2 19:13:14.650776 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Oct 2 19:13:14.650781 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Oct 2 19:13:14.650786 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Oct 2 19:13:14.650791 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Oct 2 19:13:14.650797 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Oct 2 19:13:14.650803 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Oct 2 19:13:14.650808 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Oct 2 19:13:14.650813 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Oct 2 19:13:14.650818 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Oct 2 19:13:14.650823 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Oct 2 19:13:14.650829 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Oct 2 19:13:14.650834 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Oct 2 19:13:14.650839 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Oct 2 19:13:14.650844 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Oct 2 19:13:14.650850 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Oct 2 19:13:14.650855 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Oct 2 19:13:14.650860 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Oct 2 19:13:14.650865 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Oct 2 19:13:14.650871 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Oct 2 19:13:14.650876 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Oct 2 19:13:14.650881 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Oct 2 19:13:14.650886 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Oct 2 19:13:14.650891 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Oct 2 19:13:14.650897 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Oct 2 19:13:14.650902 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Oct 2 19:13:14.650907 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Oct 2 19:13:14.650913 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Oct 2 19:13:14.650918 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Oct 2 19:13:14.650923 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Oct 2 19:13:14.650928 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge 
lint[0x1]) Oct 2 19:13:14.650933 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Oct 2 19:13:14.650939 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Oct 2 19:13:14.650944 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Oct 2 19:13:14.650950 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Oct 2 19:13:14.650955 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Oct 2 19:13:14.650960 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Oct 2 19:13:14.650965 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Oct 2 19:13:14.650970 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Oct 2 19:13:14.650975 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Oct 2 19:13:14.650981 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Oct 2 19:13:14.650986 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Oct 2 19:13:14.650991 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Oct 2 19:13:14.650997 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Oct 2 19:13:14.651002 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Oct 2 19:13:14.651008 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Oct 2 19:13:14.651013 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Oct 2 19:13:14.651018 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Oct 2 19:13:14.651023 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Oct 2 19:13:14.651028 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Oct 2 19:13:14.651034 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Oct 2 19:13:14.651039 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Oct 2 19:13:14.651044 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Oct 2 19:13:14.651050 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Oct 2 19:13:14.651055 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Oct 2 19:13:14.651060 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Oct 2 19:13:14.651065 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Oct 2 19:13:14.651071 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Oct 2 19:13:14.651084 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Oct 2 19:13:14.651091 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Oct 2 19:13:14.651096 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Oct 2 19:13:14.651101 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Oct 2 19:13:14.651108 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Oct 2 19:13:14.651113 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Oct 2 19:13:14.651118 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Oct 2 19:13:14.651123 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Oct 2 19:13:14.651128 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Oct 2 19:13:14.651133 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Oct 2 19:13:14.651138 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Oct 2 19:13:14.651143 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Oct 2 19:13:14.651149 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Oct 2 19:13:14.651154 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Oct 2 19:13:14.651160 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Oct 2 19:13:14.651165 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Oct 2 
19:13:14.651170 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Oct 2 19:13:14.651176 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Oct 2 19:13:14.651180 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Oct 2 19:13:14.651186 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Oct 2 19:13:14.651191 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Oct 2 19:13:14.651199 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Oct 2 19:13:14.651206 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Oct 2 19:13:14.651213 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Oct 2 19:13:14.651218 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Oct 2 19:13:14.651223 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Oct 2 19:13:14.651228 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Oct 2 19:13:14.651233 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Oct 2 19:13:14.651238 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Oct 2 19:13:14.651244 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Oct 2 19:13:14.651249 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Oct 2 19:13:14.651254 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Oct 2 19:13:14.651260 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Oct 2 19:13:14.651265 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Oct 2 19:13:14.651271 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Oct 2 19:13:14.651276 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Oct 2 19:13:14.651281 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Oct 2 19:13:14.651286 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Oct 2 19:13:14.651291 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Oct 2 19:13:14.651296 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Oct 2 19:13:14.651302 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Oct 2 19:13:14.651307 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Oct 2 19:13:14.651313 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Oct 2 19:13:14.651318 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Oct 2 19:13:14.651323 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Oct 2 19:13:14.651328 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Oct 2 19:13:14.651333 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Oct 2 19:13:14.651338 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Oct 2 19:13:14.651344 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Oct 2 19:13:14.651349 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Oct 2 19:13:14.651354 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 19:13:14.651361 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Oct 2 19:13:14.651366 kernel: TSC deadline timer available Oct 2 19:13:14.651371 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Oct 2 19:13:14.651376 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Oct 2 19:13:14.651381 kernel: Booting paravirtualized kernel on VMware hypervisor Oct 2 19:13:14.651387 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 19:13:14.651392 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1 Oct 2 19:13:14.651398 kernel: percpu: Embedded 55 
pages/cpu s185624 r8192 d31464 u262144 Oct 2 19:13:14.651403 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152 Oct 2 19:13:14.651409 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Oct 2 19:13:14.651415 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Oct 2 19:13:14.651420 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Oct 2 19:13:14.651425 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Oct 2 19:13:14.651430 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Oct 2 19:13:14.651435 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Oct 2 19:13:14.651440 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Oct 2 19:13:14.651452 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Oct 2 19:13:14.651458 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Oct 2 19:13:14.651465 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Oct 2 19:13:14.651470 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Oct 2 19:13:14.651476 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Oct 2 19:13:14.651481 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Oct 2 19:13:14.651487 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Oct 2 19:13:14.651493 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Oct 2 19:13:14.651499 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Oct 2 19:13:14.651505 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Oct 2 19:13:14.651511 kernel: Policy zone: DMA32 Oct 2 19:13:14.651517 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:13:14.651523 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:13:14.651529 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Oct 2 19:13:14.651535 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Oct 2 19:13:14.651540 kernel: printk: log_buf_len min size: 262144 bytes Oct 2 19:13:14.651546 kernel: printk: log_buf_len: 1048576 bytes Oct 2 19:13:14.651551 kernel: printk: early log buf free: 239728(91%) Oct 2 19:13:14.651558 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:13:14.651564 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 2 19:13:14.651569 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:13:14.651575 kernel: Memory: 1942952K/2096628K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 153416K reserved, 0K cma-reserved) Oct 2 19:13:14.651581 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Oct 2 19:13:14.651586 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 19:13:14.651592 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 19:13:14.651598 kernel: rcu: Hierarchical RCU implementation. Oct 2 19:13:14.651604 kernel: rcu: RCU event tracing is enabled. Oct 2 19:13:14.651610 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Oct 2 19:13:14.651616 kernel: Rude variant of Tasks RCU enabled. Oct 2 19:13:14.651621 kernel: Tracing variant of Tasks RCU enabled. 
Oct 2 19:13:14.651627 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 2 19:13:14.651632 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Oct 2 19:13:14.651638 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Oct 2 19:13:14.651645 kernel: random: crng init done Oct 2 19:13:14.651650 kernel: Console: colour VGA+ 80x25 Oct 2 19:13:14.651656 kernel: printk: console [tty0] enabled Oct 2 19:13:14.651661 kernel: printk: console [ttyS0] enabled Oct 2 19:13:14.651667 kernel: ACPI: Core revision 20210730 Oct 2 19:13:14.651673 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Oct 2 19:13:14.651678 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 19:13:14.651684 kernel: x2apic enabled Oct 2 19:13:14.651690 kernel: Switched APIC routing to physical x2apic. Oct 2 19:13:14.651696 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 2 19:13:14.651702 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 2 19:13:14.651708 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000) Oct 2 19:13:14.651713 kernel: Disabled fast string operations Oct 2 19:13:14.651719 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Oct 2 19:13:14.651724 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Oct 2 19:13:14.651730 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 19:13:14.651736 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Oct 2 19:13:14.651742 kernel: Spectre V2 : Mitigation: Enhanced IBRS Oct 2 19:13:14.651748 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 19:13:14.651754 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Oct 2 19:13:14.651760 kernel: RETBleed: Mitigation: Enhanced IBRS Oct 2 19:13:14.651765 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 2 19:13:14.651771 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Oct 2 19:13:14.651777 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 2 19:13:14.651782 kernel: SRBDS: Unknown: Dependent on hypervisor status Oct 2 19:13:14.651788 kernel: GDS: Unknown: Dependent on hypervisor status Oct 2 19:13:14.651793 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 2 19:13:14.651800 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 2 19:13:14.651806 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 2 19:13:14.651811 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 2 19:13:14.651817 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 2 19:13:14.651823 kernel: Freeing SMP alternatives memory: 32K Oct 2 19:13:14.651828 kernel: pid_max: default: 131072 minimum: 1024 Oct 2 19:13:14.651834 kernel: LSM: Security Framework initializing Oct 2 19:13:14.651839 kernel: SELinux: Initializing. 
Oct 2 19:13:14.651845 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 2 19:13:14.651852 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 2 19:13:14.651857 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Oct 2 19:13:14.651863 kernel: Performance Events: Skylake events, core PMU driver. Oct 2 19:13:14.651869 kernel: core: CPUID marked event: 'cpu cycles' unavailable Oct 2 19:13:14.651874 kernel: core: CPUID marked event: 'instructions' unavailable Oct 2 19:13:14.651880 kernel: core: CPUID marked event: 'bus cycles' unavailable Oct 2 19:13:14.651885 kernel: core: CPUID marked event: 'cache references' unavailable Oct 2 19:13:14.651891 kernel: core: CPUID marked event: 'cache misses' unavailable Oct 2 19:13:14.651897 kernel: core: CPUID marked event: 'branch instructions' unavailable Oct 2 19:13:14.651903 kernel: core: CPUID marked event: 'branch misses' unavailable Oct 2 19:13:14.651908 kernel: ... version: 1 Oct 2 19:13:14.651914 kernel: ... bit width: 48 Oct 2 19:13:14.651919 kernel: ... generic registers: 4 Oct 2 19:13:14.651925 kernel: ... value mask: 0000ffffffffffff Oct 2 19:13:14.651930 kernel: ... max period: 000000007fffffff Oct 2 19:13:14.651936 kernel: ... fixed-purpose events: 0 Oct 2 19:13:14.651941 kernel: ... event mask: 000000000000000f Oct 2 19:13:14.651948 kernel: signal: max sigframe size: 1776 Oct 2 19:13:14.651954 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:13:14.651959 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 2 19:13:14.651965 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:13:14.651970 kernel: x86: Booting SMP configuration: Oct 2 19:13:14.651976 kernel: .... node #0, CPUs: #1 Oct 2 19:13:14.651981 kernel: Disabled fast string operations Oct 2 19:13:14.651987 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Oct 2 19:13:14.651993 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Oct 2 19:13:14.651998 kernel: smp: Brought up 1 node, 2 CPUs Oct 2 19:13:14.652005 kernel: smpboot: Max logical packages: 128 Oct 2 19:13:14.652010 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Oct 2 19:13:14.652016 kernel: devtmpfs: initialized Oct 2 19:13:14.652022 kernel: x86/mm: Memory block size: 128MB Oct 2 19:13:14.652027 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Oct 2 19:13:14.652033 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:13:14.652039 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Oct 2 19:13:14.652044 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:13:14.652050 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:13:14.652056 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:13:14.652062 kernel: audit: type=2000 audit(1696273993.060:1): state=initialized audit_enabled=0 res=1 Oct 2 19:13:14.652068 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:13:14.652081 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 19:13:14.652089 kernel: cpuidle: using governor menu Oct 2 19:13:14.652095 kernel: Simple Boot Flag at 0x36 set to 0x80 Oct 2 19:13:14.652100 kernel: ACPI: bus type PCI registered Oct 2 19:13:14.652106 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:13:14.652111 kernel: dca service started, version 1.12.1 Oct 2 
19:13:14.652119 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Oct 2 19:13:14.652125 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820 Oct 2 19:13:14.652130 kernel: PCI: Using configuration type 1 for base access Oct 2 19:13:14.652137 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 2 19:13:14.652144 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:13:14.652149 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:13:14.652156 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:13:14.652162 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:13:14.652167 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:13:14.652174 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:13:14.652179 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:13:14.652185 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:13:14.652191 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:13:14.652197 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:13:14.652202 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Oct 2 19:13:14.652209 kernel: ACPI: Interpreter enabled Oct 2 19:13:14.652215 kernel: ACPI: PM: (supports S0 S1 S5) Oct 2 19:13:14.652220 kernel: ACPI: Using IOAPIC for interrupt routing Oct 2 19:13:14.652227 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 2 19:13:14.652234 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Oct 2 19:13:14.652239 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Oct 2 19:13:14.652315 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:13:14.652365 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Oct 2 19:13:14.652409 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Oct 2 19:13:14.652418 kernel: PCI host bridge to bus 0000:00 Oct 2 19:13:14.652466 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 2 19:13:14.652507 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000cffff window] Oct 2 19:13:14.652546 kernel: pci_bus 0000:00: root bus resource [mem 0x000d0000-0x000d3fff window] Oct 2 19:13:14.652585 kernel: pci_bus 0000:00: root bus resource [mem 0x000d4000-0x000d7fff window] Oct 2 19:13:14.652624 kernel: pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff window] Oct 2 19:13:14.652662 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 2 19:13:14.652700 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 2 19:13:14.652741 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Oct 2 19:13:14.652780 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Oct 2 19:13:14.652831 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Oct 2 19:13:14.652882 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Oct 2 19:13:14.652930 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Oct 2 19:13:14.652981 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Oct 2 19:13:14.653027 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Oct 2 19:13:14.653073 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 2 19:13:14.653127 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 2 19:13:14.653175 kernel: pci 
0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 2 19:13:14.653219 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 2 19:13:14.653266 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Oct 2 19:13:14.653311 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Oct 2 19:13:14.653357 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Oct 2 19:13:14.653406 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Oct 2 19:13:14.653451 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Oct 2 19:13:14.653495 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Oct 2 19:13:14.653543 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Oct 2 19:13:14.653588 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Oct 2 19:13:14.653635 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Oct 2 19:13:14.653679 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Oct 2 19:13:14.653722 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Oct 2 19:13:14.653765 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 2 19:13:14.653813 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Oct 2 19:13:14.653863 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.653910 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.653961 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.654007 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.654054 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.654106 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.654157 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.654202 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.654253 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.654299 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.654346 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.654394 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.654441 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.654492 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.654545 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.654594 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.654642 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.654687 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.654736 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.654782 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.654832 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.654876 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.654926 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.654970 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.655018 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.655062 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.655125 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Oct 
2 19:13:14.655172 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.655220 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.655264 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.655311 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.655355 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.655404 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.655450 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.655498 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.655543 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.655590 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.655635 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.655682 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.655729 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.655778 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.655823 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.655872 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.655917 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.655964 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.656012 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.656059 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.656116 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.656171 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.656216 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.656264 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.656314 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.656361 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.656405 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.656453 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.656497 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.656547 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.656595 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.656642 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.656687 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.656735 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.656780 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.656827 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Oct 2 19:13:14.656872 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.656921 kernel: pci_bus 0000:01: extended config space not accessible Oct 2 19:13:14.656966 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 2 19:13:14.657012 kernel: pci_bus 0000:02: extended config space not accessible Oct 2 19:13:14.657021 kernel: acpiphp: Slot [32] registered Oct 2 19:13:14.657027 kernel: acpiphp: Slot [33] registered Oct 2 19:13:14.657032 kernel: acpiphp: Slot [34] registered Oct 2 19:13:14.657038 kernel: acpiphp: Slot [35] 
registered Oct 2 19:13:14.657045 kernel: acpiphp: Slot [36] registered Oct 2 19:13:14.657051 kernel: acpiphp: Slot [37] registered Oct 2 19:13:14.657056 kernel: acpiphp: Slot [38] registered Oct 2 19:13:14.657062 kernel: acpiphp: Slot [39] registered Oct 2 19:13:14.657068 kernel: acpiphp: Slot [40] registered Oct 2 19:13:14.657073 kernel: acpiphp: Slot [41] registered Oct 2 19:13:14.657088 kernel: acpiphp: Slot [42] registered Oct 2 19:13:14.657094 kernel: acpiphp: Slot [43] registered Oct 2 19:13:14.657100 kernel: acpiphp: Slot [44] registered Oct 2 19:13:14.657105 kernel: acpiphp: Slot [45] registered Oct 2 19:13:14.657114 kernel: acpiphp: Slot [46] registered Oct 2 19:13:14.657120 kernel: acpiphp: Slot [47] registered Oct 2 19:13:14.657126 kernel: acpiphp: Slot [48] registered Oct 2 19:13:14.657131 kernel: acpiphp: Slot [49] registered Oct 2 19:13:14.657138 kernel: acpiphp: Slot [50] registered Oct 2 19:13:14.657144 kernel: acpiphp: Slot [51] registered Oct 2 19:13:14.657150 kernel: acpiphp: Slot [52] registered Oct 2 19:13:14.657155 kernel: acpiphp: Slot [53] registered Oct 2 19:13:14.657161 kernel: acpiphp: Slot [54] registered Oct 2 19:13:14.657167 kernel: acpiphp: Slot [55] registered Oct 2 19:13:14.657173 kernel: acpiphp: Slot [56] registered Oct 2 19:13:14.657178 kernel: acpiphp: Slot [57] registered Oct 2 19:13:14.657184 kernel: acpiphp: Slot [58] registered Oct 2 19:13:14.657190 kernel: acpiphp: Slot [59] registered Oct 2 19:13:14.657195 kernel: acpiphp: Slot [60] registered Oct 2 19:13:14.657201 kernel: acpiphp: Slot [61] registered Oct 2 19:13:14.657207 kernel: acpiphp: Slot [62] registered Oct 2 19:13:14.657212 kernel: acpiphp: Slot [63] registered Oct 2 19:13:14.657263 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Oct 2 19:13:14.657312 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 2 19:13:14.657355 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 2 19:13:14.657400 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 2 19:13:14.657444 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Oct 2 19:13:14.657489 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000cffff window] (subtractive decode) Oct 2 19:13:14.657533 kernel: pci 0000:00:11.0: bridge window [mem 0x000d0000-0x000d3fff window] (subtractive decode) Oct 2 19:13:14.657577 kernel: pci 0000:00:11.0: bridge window [mem 0x000d4000-0x000d7fff window] (subtractive decode) Oct 2 19:13:14.657623 kernel: pci 0000:00:11.0: bridge window [mem 0x000d8000-0x000dbfff window] (subtractive decode) Oct 2 19:13:14.657667 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Oct 2 19:13:14.657711 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Oct 2 19:13:14.657755 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Oct 2 19:13:14.657806 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Oct 2 19:13:14.657852 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Oct 2 19:13:14.657903 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Oct 2 19:13:14.657953 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Oct 2 19:13:14.657999 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Oct 2 19:13:14.658053 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Oct 2 19:13:14.658144 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 2 19:13:14.658200 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 2 19:13:14.658257 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 2 19:13:14.658304 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 2 19:13:14.658363 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Oct 2 19:13:14.658426 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 2 19:13:14.658475 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Oct 2 19:13:14.658520 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 2 19:13:14.658566 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 2 19:13:14.658610 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 2 19:13:14.658655 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 2 19:13:14.658699 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 2 19:13:14.658746 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 2 19:13:14.658790 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 2 19:13:14.658836 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 2 19:13:14.658880 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 2 19:13:14.658927 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 2 19:13:14.658972 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 2 19:13:14.659016 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 2 19:13:14.659060 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 2 19:13:14.659114 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 2 19:13:14.659162 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 2 19:13:14.659208 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 2 19:13:14.659251 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 2 19:13:14.659299 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 2 19:13:14.659343 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 2 19:13:14.659394 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Oct 2 19:13:14.659441 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Oct 2 19:13:14.659486 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Oct 2 19:13:14.659532 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Oct 2 19:13:14.659577 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Oct 2 19:13:14.659623 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Oct 2 19:13:14.659671 kernel: pci 0000:0b:00.0: supports D1 D2 Oct 2 19:13:14.659717 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 2 19:13:14.659762 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Oct 2 19:13:14.659807 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 2 19:13:14.659852 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 2 19:13:14.659897 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 2 19:13:14.659948 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 2 19:13:14.660000 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 2 19:13:14.660047 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 2 19:13:14.660104 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 2 19:13:14.660152 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 2 19:13:14.660202 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 2 19:13:14.660247 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 2 19:13:14.660291 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 2 19:13:14.660336 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 2 19:13:14.660383 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 2 19:13:14.660428 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 2 19:13:14.660473 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 2 19:13:14.660517 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 2 19:13:14.660561 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 2 19:13:14.660605 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 2 19:13:14.660649 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 2 19:13:14.660693 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 2 19:13:14.660739 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 2 19:13:14.660782 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 2 19:13:14.660827 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 2 19:13:14.660870 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 2 19:13:14.660914 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 2 19:13:14.660958 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 2 19:13:14.661003 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 2 19:13:14.661047 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 2 19:13:14.661202 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 2 19:13:14.663888 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 2 19:13:14.663947 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 2 19:13:14.663996 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 2 19:13:14.664043 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 2 19:13:14.664095 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 2 19:13:14.664141 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 2 19:13:14.664223 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 2 19:13:14.664291 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 2 19:13:14.664335 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 2 19:13:14.664382 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 2 19:13:14.664426 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 2 19:13:14.664470 kernel: pci 0000:00:17.3: bridge window [mem 
0xe6e00000-0xe6efffff 64bit pref] Oct 2 19:13:14.664515 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 2 19:13:14.664560 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 2 19:13:14.664607 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 2 19:13:14.664652 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 2 19:13:14.664696 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 2 19:13:14.664741 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 2 19:13:14.664786 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 2 19:13:14.664831 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 2 19:13:14.664875 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 2 19:13:14.664919 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 2 19:13:14.664964 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 2 19:13:14.665010 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 2 19:13:14.665054 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 2 19:13:14.675812 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 2 19:13:14.675872 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 2 19:13:14.675923 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 2 19:13:14.675975 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 2 19:13:14.676020 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 2 19:13:14.676072 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 2 19:13:14.676127 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 2 19:13:14.676177 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 2 19:13:14.676222 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 2 19:13:14.676268 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 2 19:13:14.676314 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 2 19:13:14.676358 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 2 19:13:14.676407 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 2 19:13:14.676461 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 2 19:13:14.676506 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 2 19:13:14.676555 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 2 19:13:14.676602 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 2 19:13:14.676647 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 2 19:13:14.676690 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 2 19:13:14.676736 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 2 19:13:14.676779 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 2 19:13:14.676826 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 2 19:13:14.676874 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 2 19:13:14.676921 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 2 19:13:14.676969 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 2 19:13:14.676978 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Oct 2 19:13:14.676984 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Oct 2 19:13:14.676990 kernel: ACPI: PCI: Interrupt 
link LNKB disabled Oct 2 19:13:14.676996 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 2 19:13:14.677002 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Oct 2 19:13:14.677009 kernel: iommu: Default domain type: Translated Oct 2 19:13:14.677015 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 2 19:13:14.677092 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Oct 2 19:13:14.677142 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 2 19:13:14.677186 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Oct 2 19:13:14.677195 kernel: vgaarb: loaded Oct 2 19:13:14.677200 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:13:14.677206 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:13:14.677214 kernel: PTP clock support registered Oct 2 19:13:14.677220 kernel: PCI: Using ACPI for IRQ routing Oct 2 19:13:14.677226 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 2 19:13:14.677232 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Oct 2 19:13:14.677237 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Oct 2 19:13:14.677243 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Oct 2 19:13:14.677249 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Oct 2 19:13:14.677255 kernel: clocksource: Switched to clocksource tsc-early Oct 2 19:13:14.677261 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:13:14.677268 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:13:14.677274 kernel: pnp: PnP ACPI init Oct 2 19:13:14.677322 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Oct 2 19:13:14.677365 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Oct 2 19:13:14.677406 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Oct 2 19:13:14.677449 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Oct 2 19:13:14.677493 kernel: pnp 00:06: [dma 2] Oct 2 19:13:14.677540 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Oct 2 19:13:14.677581 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Oct 2 19:13:14.677637 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Oct 2 19:13:14.677647 kernel: pnp: PnP ACPI: found 8 devices Oct 2 19:13:14.677653 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 2 19:13:14.677659 kernel: NET: Registered PF_INET protocol family Oct 2 19:13:14.677665 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:13:14.677671 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 2 19:13:14.677678 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:13:14.677684 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 2 19:13:14.677690 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Oct 2 19:13:14.677695 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 2 19:13:14.677701 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 2 19:13:14.677707 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 2 19:13:14.677713 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:13:14.677719 kernel: NET: Registered PF_XDP protocol family Oct 2 19:13:14.677768 kernel: pci 0000:00:15.0: bridge window [mem 
0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Oct 2 19:13:14.677823 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Oct 2 19:13:14.677873 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 2 19:13:14.677925 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 2 19:13:14.677974 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 2 19:13:14.678024 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Oct 2 19:13:14.678102 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Oct 2 19:13:14.678154 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Oct 2 19:13:14.678202 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Oct 2 19:13:14.678248 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Oct 2 19:13:14.678302 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Oct 2 19:13:14.678364 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Oct 2 19:13:14.678415 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Oct 2 19:13:14.678462 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Oct 2 19:13:14.678508 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Oct 2 19:13:14.678564 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Oct 2 19:13:14.678615 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Oct 2 19:13:14.678661 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Oct 2 19:13:14.678721 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Oct 2 19:13:14.678777 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Oct 2 19:13:14.678841 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Oct 2 19:13:14.678897 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Oct 2 19:13:14.678944 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Oct 2 19:13:14.678990 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Oct 2 19:13:14.679036 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Oct 2 19:13:14.679094 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.679142 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.679211 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.679268 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.679314 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.679364 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.679413 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.679461 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.679520 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.679582 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 
0x1000] Oct 2 19:13:14.679637 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.679682 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.679726 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.679774 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.679833 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.679907 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.679959 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.680004 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.680049 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.680146 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.680210 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.680268 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.680318 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.680365 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.680415 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.680461 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.680507 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.680556 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.680612 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.680670 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.680730 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.680798 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.680843 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.680887 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.680931 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.680975 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.681020 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.681140 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.681190 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.681242 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.681298 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.681343 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.681388 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.681441 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.681486 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.681550 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.681608 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.681655 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.681702 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.681746 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.681806 
kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.681868 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.681922 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.681966 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.682010 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.682063 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.682136 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.682203 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.682256 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.682318 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.682364 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.682410 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.682458 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.682518 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.682582 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.682652 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.682704 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.682753 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.682798 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.682847 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.682893 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.682937 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.682985 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.683033 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.683113 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.683191 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.683239 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.683286 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.683330 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.683374 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.683418 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.683461 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.683520 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Oct 2 19:13:14.683583 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Oct 2 19:13:14.683637 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 2 19:13:14.683686 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Oct 2 19:13:14.683734 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 2 19:13:14.683783 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 2 19:13:14.683827 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 2 19:13:14.683882 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Oct 2 19:13:14.683930 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] 
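The long run of paired "BAR 13: no space for [io size 0x1000]" / "failed to assign [io size 0x1000]" messages above is the PCI core repeatedly trying to reserve a 4 KiB I/O window for each hot-plug root port; with this many ports, the 64 KiB legacy I/O space visible on the root bus apparently cannot hold a window for every bridge, so most of the requests fail while the memory windows still succeed. As a rough illustration only (not part of the boot log), the following Python sketch tallies such failures from a saved copy of the log; the file name boot.log and the exact regular expression are assumptions for the example:

import re
from collections import Counter

# Matches entries such as:
#   pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
FAILED = re.compile(
    r"pci (?P<dev>[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.\d): "
    r"BAR (?P<bar>\d+): failed to assign \[io size (?P<size>0x[0-9a-f]+)\]"
)

def summarize(path="boot.log"):  # "boot.log" is an assumed example path
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = FAILED.search(line)
            if m:
                counts[m.group("dev")] += 1
    for dev, n in sorted(counts.items()):
        print(f"{dev}: {n} failed I/O window assignment(s)")

if __name__ == "__main__":
    summarize()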
Oct 2 19:13:14.683974 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 2 19:13:14.684033 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 2 19:13:14.684087 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Oct 2 19:13:14.684144 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 2 19:13:14.684212 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Oct 2 19:13:14.684258 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 2 19:13:14.684302 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Oct 2 19:13:14.684350 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 2 19:13:14.684394 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 2 19:13:14.684439 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 2 19:13:14.684492 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 2 19:13:14.684563 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 2 19:13:14.684621 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 2 19:13:14.684675 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 2 19:13:14.684727 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 2 19:13:14.684772 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 2 19:13:14.684820 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 2 19:13:14.684866 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 2 19:13:14.684913 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 2 19:13:14.684962 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 2 19:13:14.685008 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 2 19:13:14.685060 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 2 19:13:14.685126 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 2 19:13:14.685190 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 2 19:13:14.685245 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 2 19:13:14.685292 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 2 19:13:14.685347 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Oct 2 19:13:14.685397 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 2 19:13:14.685445 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 2 19:13:14.685497 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 2 19:13:14.685543 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Oct 2 19:13:14.685605 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 2 19:13:14.685673 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 2 19:13:14.685734 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 2 19:13:14.685780 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 2 19:13:14.685826 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 2 19:13:14.685877 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 2 19:13:14.685928 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 2 19:13:14.685973 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 2 19:13:14.686018 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 2 19:13:14.686062 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] 
Oct 2 19:13:14.686127 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 2 19:13:14.686196 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 2 19:13:14.686253 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 2 19:13:14.686306 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 2 19:13:14.686352 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 2 19:13:14.686400 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 2 19:13:14.686449 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 2 19:13:14.686495 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 2 19:13:14.686540 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 2 19:13:14.686589 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 2 19:13:14.686647 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 2 19:13:14.686700 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 2 19:13:14.686757 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 2 19:13:14.686825 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 2 19:13:14.686872 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 2 19:13:14.686919 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 2 19:13:14.686969 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 2 19:13:14.687019 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 2 19:13:14.687065 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 2 19:13:14.687199 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 2 19:13:14.687260 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 2 19:13:14.687323 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 2 19:13:14.687369 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 2 19:13:14.687413 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 2 19:13:14.687466 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 2 19:13:14.687511 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 2 19:13:14.687556 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 2 19:13:14.687599 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 2 19:13:14.687643 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 2 19:13:14.687697 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 2 19:13:14.687766 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 2 19:13:14.687818 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 2 19:13:14.687868 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 2 19:13:14.687912 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 2 19:13:14.687964 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 2 19:13:14.688010 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 2 19:13:14.688054 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 2 19:13:14.688116 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 2 19:13:14.688172 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 2 19:13:14.688241 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 2 19:13:14.688299 kernel: pci 0000:00:18.0: PCI bridge to [bus 
1b] Oct 2 19:13:14.688360 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 2 19:13:14.688404 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 2 19:13:14.688453 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 2 19:13:14.688498 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 2 19:13:14.688543 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 2 19:13:14.688587 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 2 19:13:14.688846 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 2 19:13:14.688903 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 2 19:13:14.688952 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 2 19:13:14.688998 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 2 19:13:14.689044 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 2 19:13:14.689194 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 2 19:13:14.689271 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 2 19:13:14.689620 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 2 19:13:14.689689 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 2 19:13:14.689741 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 2 19:13:14.689804 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 2 19:13:14.689858 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 2 19:13:14.689904 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 2 19:13:14.689950 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 2 19:13:14.689994 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 2 19:13:14.690041 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 2 19:13:14.690103 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 2 19:13:14.690163 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 2 19:13:14.690238 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 2 19:13:14.690284 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Oct 2 19:13:14.690324 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000cffff window] Oct 2 19:13:14.690364 kernel: pci_bus 0000:00: resource 6 [mem 0x000d0000-0x000d3fff window] Oct 2 19:13:14.690403 kernel: pci_bus 0000:00: resource 7 [mem 0x000d4000-0x000d7fff window] Oct 2 19:13:14.690443 kernel: pci_bus 0000:00: resource 8 [mem 0x000d8000-0x000dbfff window] Oct 2 19:13:14.690483 kernel: pci_bus 0000:00: resource 9 [mem 0xc0000000-0xfebfffff window] Oct 2 19:13:14.690529 kernel: pci_bus 0000:00: resource 10 [io 0x0000-0x0cf7 window] Oct 2 19:13:14.690583 kernel: pci_bus 0000:00: resource 11 [io 0x0d00-0xfeff window] Oct 2 19:13:14.690638 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Oct 2 19:13:14.690688 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Oct 2 19:13:14.690730 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 2 19:13:14.690773 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Oct 2 19:13:14.690813 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000cffff window] Oct 2 19:13:14.690869 kernel: pci_bus 0000:02: resource 6 [mem 0x000d0000-0x000d3fff window] Oct 2 19:13:14.690926 kernel: pci_bus 0000:02: resource 7 [mem 0x000d4000-0x000d7fff window] Oct 
2 19:13:14.690973 kernel: pci_bus 0000:02: resource 8 [mem 0x000d8000-0x000dbfff window] Oct 2 19:13:14.691018 kernel: pci_bus 0000:02: resource 9 [mem 0xc0000000-0xfebfffff window] Oct 2 19:13:14.691059 kernel: pci_bus 0000:02: resource 10 [io 0x0000-0x0cf7 window] Oct 2 19:13:14.691109 kernel: pci_bus 0000:02: resource 11 [io 0x0d00-0xfeff window] Oct 2 19:13:14.691158 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Oct 2 19:13:14.691200 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Oct 2 19:13:14.691254 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Oct 2 19:13:14.691315 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Oct 2 19:13:14.691367 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Oct 2 19:13:14.691424 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Oct 2 19:13:14.691469 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Oct 2 19:13:14.691513 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Oct 2 19:13:14.691554 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Oct 2 19:13:14.691598 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Oct 2 19:13:14.691643 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Oct 2 19:13:14.691714 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Oct 2 19:13:14.691767 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 2 19:13:14.691815 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Oct 2 19:13:14.691878 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Oct 2 19:13:14.691930 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Oct 2 19:13:14.691986 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Oct 2 19:13:14.692041 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Oct 2 19:13:14.692096 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Oct 2 19:13:14.692154 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Oct 2 19:13:14.692224 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Oct 2 19:13:14.692271 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Oct 2 19:13:14.692317 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Oct 2 19:13:14.692367 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Oct 2 19:13:14.692432 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Oct 2 19:13:14.692487 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Oct 2 19:13:14.692534 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Oct 2 19:13:14.692576 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Oct 2 19:13:14.692629 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Oct 2 19:13:14.692678 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 2 19:13:14.692742 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Oct 2 19:13:14.692792 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 2 19:13:14.692843 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Oct 2 19:13:14.692885 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Oct 2 19:13:14.692944 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Oct 2 19:13:14.692990 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit 
pref] Oct 2 19:13:14.693053 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Oct 2 19:13:14.693144 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 2 19:13:14.693205 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Oct 2 19:13:14.693248 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Oct 2 19:13:14.693307 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 2 19:13:14.693376 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Oct 2 19:13:14.693425 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Oct 2 19:13:14.693468 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Oct 2 19:13:14.693685 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Oct 2 19:13:14.693739 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Oct 2 19:13:14.693786 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Oct 2 19:13:14.693853 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Oct 2 19:13:14.693897 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 2 19:13:14.693944 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Oct 2 19:13:14.693987 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 2 19:13:14.694034 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Oct 2 19:13:14.694100 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Oct 2 19:13:14.694161 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Oct 2 19:13:14.694218 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Oct 2 19:13:14.694267 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Oct 2 19:13:14.694309 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 2 19:13:14.694356 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Oct 2 19:13:14.694412 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Oct 2 19:13:14.694454 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Oct 2 19:13:14.694517 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Oct 2 19:13:14.694567 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Oct 2 19:13:14.694610 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Oct 2 19:13:14.694658 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Oct 2 19:13:14.694703 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Oct 2 19:13:14.694748 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Oct 2 19:13:14.694795 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 2 19:13:14.694849 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Oct 2 19:13:14.694910 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Oct 2 19:13:14.694964 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Oct 2 19:13:14.695009 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Oct 2 19:13:14.695055 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Oct 2 19:13:14.695111 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Oct 2 19:13:14.695170 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Oct 2 19:13:14.695217 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 2 19:13:14.695281 kernel: pci 0000:00:00.0: 
Limiting direct PCI/PCI transfers Oct 2 19:13:14.695296 kernel: PCI: CLS 32 bytes, default 64 Oct 2 19:13:14.695305 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 2 19:13:14.695312 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 2 19:13:14.695318 kernel: clocksource: Switched to clocksource tsc Oct 2 19:13:14.695324 kernel: Initialise system trusted keyrings Oct 2 19:13:14.695331 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 2 19:13:14.695337 kernel: Key type asymmetric registered Oct 2 19:13:14.695343 kernel: Asymmetric key parser 'x509' registered Oct 2 19:13:14.695349 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:13:14.695356 kernel: io scheduler mq-deadline registered Oct 2 19:13:14.695362 kernel: io scheduler kyber registered Oct 2 19:13:14.695368 kernel: io scheduler bfq registered Oct 2 19:13:14.695419 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Oct 2 19:13:14.695465 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.695513 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Oct 2 19:13:14.695574 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.695634 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Oct 2 19:13:14.695704 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.695751 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Oct 2 19:13:14.695797 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.695844 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Oct 2 19:13:14.695895 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.695952 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Oct 2 19:13:14.696014 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.696389 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Oct 2 19:13:14.696455 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.696506 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Oct 2 19:13:14.696555 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.696638 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Oct 2 19:13:14.696953 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.697142 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Oct 2 19:13:14.697198 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.697256 kernel: pcieport 0000:00:16.2: PME: Signaling with 
IRQ 34 Oct 2 19:13:14.697317 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.697376 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Oct 2 19:13:14.697442 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.697488 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Oct 2 19:13:14.697534 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.697578 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Oct 2 19:13:14.697623 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.697685 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Oct 2 19:13:14.697747 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.697798 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Oct 2 19:13:14.697850 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.697896 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Oct 2 19:13:14.697942 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.698003 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Oct 2 19:13:14.698066 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.698137 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Oct 2 19:13:14.698185 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.698230 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Oct 2 19:13:14.700168 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.700224 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Oct 2 19:13:14.700275 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.700323 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Oct 2 19:13:14.700390 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.700451 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Oct 2 19:13:14.700516 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.700569 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Oct 2 19:13:14.700614 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.700659 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Oct 2 19:13:14.700704 kernel: 
pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.700750 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Oct 2 19:13:14.700793 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.700841 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Oct 2 19:13:14.700886 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.700931 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Oct 2 19:13:14.700976 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.701020 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Oct 2 19:13:14.701068 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.701121 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Oct 2 19:13:14.701165 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.701211 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Oct 2 19:13:14.701257 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.701301 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Oct 2 19:13:14.701348 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:13:14.701357 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 2 19:13:14.701364 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:13:14.701370 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 2 19:13:14.701376 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Oct 2 19:13:14.701382 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 2 19:13:14.701388 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 2 19:13:14.701439 kernel: rtc_cmos 00:01: registered as rtc0 Oct 2 19:13:14.701449 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 2 19:13:14.701490 kernel: rtc_cmos 00:01: setting system clock to 2023-10-02T19:13:14 UTC (1696273994) Oct 2 19:13:14.701531 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Oct 2 19:13:14.701539 kernel: fail to initialize ptp_kvm Oct 2 19:13:14.701546 kernel: intel_pstate: CPU model not supported Oct 2 19:13:14.701552 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:13:14.701558 kernel: Segment Routing with IPv6 Oct 2 19:13:14.701843 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:13:14.701852 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:13:14.701858 kernel: Key type dns_resolver registered Oct 2 19:13:14.701864 kernel: IPI shorthand broadcast: enabled Oct 2 19:13:14.701870 kernel: sched_clock: Marking stable (888126535, 223808554)->(1180826522, -68891433) Oct 2 19:13:14.701876 kernel: registered taskstats version 1 Oct 2 19:13:14.701882 kernel: Loading compiled-in X.509 certificates Oct 2 
19:13:14.701889 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861' Oct 2 19:13:14.701894 kernel: Key type .fscrypt registered Oct 2 19:13:14.701902 kernel: Key type fscrypt-provisioning registered Oct 2 19:13:14.701908 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 19:13:14.701914 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:13:14.701920 kernel: ima: No architecture policies found Oct 2 19:13:14.701926 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 19:13:14.701932 kernel: Write protecting the kernel read-only data: 28672k Oct 2 19:13:14.701938 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 19:13:14.701944 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K Oct 2 19:13:14.701951 kernel: Run /init as init process Oct 2 19:13:14.701958 kernel: with arguments: Oct 2 19:13:14.701964 kernel: /init Oct 2 19:13:14.701970 kernel: with environment: Oct 2 19:13:14.701976 kernel: HOME=/ Oct 2 19:13:14.701982 kernel: TERM=linux Oct 2 19:13:14.701988 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:13:14.701996 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:13:14.702004 systemd[1]: Detected virtualization vmware. Oct 2 19:13:14.702011 systemd[1]: Detected architecture x86-64. Oct 2 19:13:14.702018 systemd[1]: Running in initrd. Oct 2 19:13:14.702024 systemd[1]: No hostname configured, using default hostname. Oct 2 19:13:14.702030 systemd[1]: Hostname set to . Oct 2 19:13:14.702036 systemd[1]: Initializing machine ID from random generator. Oct 2 19:13:14.702042 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:13:14.702048 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:13:14.702054 systemd[1]: Reached target cryptsetup.target. Oct 2 19:13:14.702061 systemd[1]: Reached target paths.target. Oct 2 19:13:14.702067 systemd[1]: Reached target slices.target. Oct 2 19:13:14.702092 systemd[1]: Reached target swap.target. Oct 2 19:13:14.702099 systemd[1]: Reached target timers.target. Oct 2 19:13:14.702105 systemd[1]: Listening on iscsid.socket. Oct 2 19:13:14.702112 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:13:14.702118 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:13:14.702124 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:13:14.702132 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:13:14.702138 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:13:14.702144 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:13:14.702150 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:13:14.702156 systemd[1]: Reached target sockets.target. Oct 2 19:13:14.702163 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:13:14.702169 systemd[1]: Finished network-cleanup.service. Oct 2 19:13:14.702175 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:13:14.702181 systemd[1]: Starting systemd-journald.service... Oct 2 19:13:14.702189 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:13:14.702195 systemd[1]: Starting systemd-resolved.service... Oct 2 19:13:14.702201 systemd[1]: Starting systemd-vconsole-setup.service... 
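The rtc_cmos entry above pairs the wall-clock time 2023-10-02T19:13:14 UTC with the epoch value 1696273994; the two figures are consistent, as this small check (illustrative only, not part of the log) confirms:

from datetime import datetime, timezone

# Epoch seconds reported by rtc_cmos when it set the system clock.
epoch = 1696273994
stamp = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(stamp.isoformat())  # prints 2023-10-02T19:13:14+00:00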
Oct 2 19:13:14.702207 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:13:14.702214 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:13:14.702220 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:13:14.702226 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:13:14.702233 kernel: audit: type=1130 audit(1696273994.664:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:14.702240 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:13:14.702246 kernel: audit: type=1130 audit(1696273994.668:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:14.702252 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:13:14.702259 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:13:14.702265 kernel: Bridge firewalling registered Oct 2 19:13:14.702271 systemd[1]: Started systemd-resolved.service. Oct 2 19:13:14.702277 kernel: audit: type=1130 audit(1696273994.689:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:14.702284 systemd[1]: Reached target nss-lookup.target. Oct 2 19:13:14.702557 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:13:14.702569 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:13:14.702576 kernel: audit: type=1130 audit(1696273994.696:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:14.702586 systemd-journald[216]: Journal started Oct 2 19:13:14.702619 systemd-journald[216]: Runtime Journal (/run/log/journal/e7e1ce4ebe7048af80545bdc638fbd29) is 4.8M, max 38.8M, 34.0M free. Oct 2 19:13:14.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:14.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:14.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:14.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:14.647016 systemd-modules-load[217]: Inserted module 'overlay' Oct 2 19:13:14.704714 systemd[1]: Started systemd-journald.service. Oct 2 19:13:14.677634 systemd-modules-load[217]: Inserted module 'br_netfilter' Oct 2 19:13:14.685611 systemd-resolved[218]: Positive Trust Anchors: Oct 2 19:13:14.685618 systemd-resolved[218]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:13:14.685639 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:13:14.688712 systemd-resolved[218]: Defaulting to hostname 'linux'. Oct 2 19:13:14.713237 kernel: audit: type=1130 audit(1696273994.705:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:14.713255 kernel: SCSI subsystem initialized Oct 2 19:13:14.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:14.713285 dracut-cmdline[233]: dracut-dracut-053 Oct 2 19:13:14.713285 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:13:14.719422 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:13:14.719441 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:13:14.720642 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:13:14.722712 systemd-modules-load[217]: Inserted module 'dm_multipath' Oct 2 19:13:14.723066 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:13:14.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:14.723554 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:13:14.728225 kernel: audit: type=1130 audit(1696273994.722:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:14.729805 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:13:14.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:14.733090 kernel: audit: type=1130 audit(1696273994.728:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:14.755102 kernel: Loading iSCSI transport class v2.0-870. 
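The dracut-cmdline entries above show dracut reporting its own defaults (rd.driver.pre=btrfs, rootflags=rw, mount.usrflags=ro) ahead of the original boot arguments, which is why rootflags and mount.usrflags each appear twice in the reported string. A kernel command line is simply a whitespace-separated list of key[=value] tokens, and duplicates are carried through as-is; the sketch below (illustrative only, with the command line shortened to an excerpt) shows one way to split it:

def parse_cmdline(cmdline: str):
    """Split a kernel command line into (key, value) pairs, keeping duplicates."""
    params = []
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params.append((key, value if sep else None))
    return params

# Shortened excerpt of the command line reported by dracut above.
example = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
           "rootflags=rw mount.usrflags=ro root=LABEL=ROOT")
for key, value in parse_cmdline(example):
    print(key, value)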
Oct 2 19:13:14.760090 kernel: iscsi: registered transport (tcp) Oct 2 19:13:14.774091 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:13:14.774125 kernel: QLogic iSCSI HBA Driver Oct 2 19:13:14.791425 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:13:14.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:14.792049 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:13:14.795091 kernel: audit: type=1130 audit(1696273994.790:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:14.829099 kernel: raid6: avx2x4 gen() 45728 MB/s Oct 2 19:13:14.846094 kernel: raid6: avx2x4 xor() 20573 MB/s Oct 2 19:13:14.863086 kernel: raid6: avx2x2 gen() 52836 MB/s Oct 2 19:13:14.880093 kernel: raid6: avx2x2 xor() 31318 MB/s Oct 2 19:13:14.897090 kernel: raid6: avx2x1 gen() 40593 MB/s Oct 2 19:13:14.914094 kernel: raid6: avx2x1 xor() 26493 MB/s Oct 2 19:13:14.931083 kernel: raid6: sse2x4 gen() 21172 MB/s Oct 2 19:13:14.948087 kernel: raid6: sse2x4 xor() 11852 MB/s Oct 2 19:13:14.965087 kernel: raid6: sse2x2 gen() 21322 MB/s Oct 2 19:13:14.982093 kernel: raid6: sse2x2 xor() 13141 MB/s Oct 2 19:13:14.999094 kernel: raid6: sse2x1 gen() 17589 MB/s Oct 2 19:13:15.016329 kernel: raid6: sse2x1 xor() 8752 MB/s Oct 2 19:13:15.016363 kernel: raid6: using algorithm avx2x2 gen() 52836 MB/s Oct 2 19:13:15.016378 kernel: raid6: .... xor() 31318 MB/s, rmw enabled Oct 2 19:13:15.017564 kernel: raid6: using avx2x2 recovery algorithm Oct 2 19:13:15.026095 kernel: xor: automatically using best checksumming function avx Oct 2 19:13:15.086096 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 19:13:15.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:15.091656 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:13:15.093000 audit: BPF prog-id=7 op=LOAD Oct 2 19:13:15.093000 audit: BPF prog-id=8 op=LOAD Oct 2 19:13:15.095441 kernel: audit: type=1130 audit(1696273995.090:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:15.094941 systemd[1]: Starting systemd-udevd.service... Oct 2 19:13:15.103440 systemd-udevd[415]: Using default interface naming scheme 'v252'. Oct 2 19:13:15.106445 systemd[1]: Started systemd-udevd.service. Oct 2 19:13:15.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:15.108268 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:13:15.115956 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Oct 2 19:13:15.132304 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:13:15.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:15.132879 systemd[1]: Starting systemd-udev-trigger.service... 
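The raid6 benchmark above times each available gen()/xor() implementation and then reports "using algorithm avx2x2 gen() 52836 MB/s": roughly speaking, the fastest measured generator wins. A toy sketch of that selection, using only the gen() numbers printed in this log (illustrative, not the kernel's actual code path):

# gen() throughputs (MB/s) as reported by the raid6 benchmark in this boot log.
gen_results = {
    "avx2x4": 45728,
    "avx2x2": 52836,
    "avx2x1": 40593,
    "sse2x4": 21172,
    "sse2x2": 21322,
    "sse2x1": 17589,
}

best = max(gen_results, key=gen_results.get)
print(f"raid6: using algorithm {best} gen() {gen_results[best]} MB/s")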
Oct 2 19:13:15.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:15.191165 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:13:15.257554 kernel: VMware PVSCSI driver - version 1.0.7.0-k Oct 2 19:13:15.257591 kernel: vmw_pvscsi: using 64bit dma Oct 2 19:13:15.257600 kernel: vmw_pvscsi: max_id: 16 Oct 2 19:13:15.257607 kernel: vmw_pvscsi: setting ring_pages to 8 Oct 2 19:13:15.267091 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Oct 2 19:13:15.269088 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Oct 2 19:13:15.271084 kernel: libata version 3.00 loaded. Oct 2 19:13:15.279094 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Oct 2 19:13:15.280087 kernel: ata_piix 0000:00:07.1: version 2.13 Oct 2 19:13:15.280178 kernel: vmw_pvscsi: enabling reqCallThreshold Oct 2 19:13:15.281106 kernel: vmw_pvscsi: driver-based request coalescing enabled Oct 2 19:13:15.281119 kernel: vmw_pvscsi: using MSI-X Oct 2 19:13:15.284503 kernel: scsi host1: ata_piix Oct 2 19:13:15.284584 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Oct 2 19:13:15.284642 kernel: scsi host2: ata_piix Oct 2 19:13:15.284662 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Oct 2 19:13:15.287023 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Oct 2 19:13:15.287041 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Oct 2 19:13:15.287049 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Oct 2 19:13:15.297087 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:13:15.302090 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Oct 2 19:13:15.455095 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Oct 2 19:13:15.458103 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Oct 2 19:13:15.467312 kernel: AVX2 version of gcm_enc/dec engaged. Oct 2 19:13:15.467348 kernel: AES CTR mode by8 optimization enabled Oct 2 19:13:15.479094 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Oct 2 19:13:15.479213 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 2 19:13:15.479276 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Oct 2 19:13:15.479333 kernel: sd 0:0:0:0: [sda] Cache data unavailable Oct 2 19:13:15.480545 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Oct 2 19:13:15.494150 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Oct 2 19:13:15.494277 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 2 19:13:15.512085 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 2 19:13:15.550090 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 19:13:15.551086 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 2 19:13:15.879931 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:13:15.880297 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (470) Oct 2 19:13:15.887261 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:13:15.899852 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:13:15.932715 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Oct 2 19:13:15.932842 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:13:15.933471 systemd[1]: Starting disk-uuid.service... Oct 2 19:13:16.393099 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 19:13:16.426087 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 19:13:17.452641 disk-uuid[543]: The operation has completed successfully. Oct 2 19:13:17.453102 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 19:13:17.490466 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:13:17.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:17.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:17.490516 systemd[1]: Finished disk-uuid.service. Oct 2 19:13:17.491104 systemd[1]: Starting verity-setup.service... Oct 2 19:13:17.513089 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 19:13:17.577631 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:13:17.577971 systemd[1]: Finished verity-setup.service. Oct 2 19:13:17.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:17.578623 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:13:17.680091 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:13:17.680484 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:13:17.681054 systemd[1]: Starting afterburn-network-kargs.service... Oct 2 19:13:17.681505 systemd[1]: Starting ignition-setup.service... Oct 2 19:13:17.741357 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:13:17.741404 kernel: BTRFS info (device sda6): using free space tree Oct 2 19:13:17.741421 kernel: BTRFS info (device sda6): has skinny extents Oct 2 19:13:17.749088 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 2 19:13:17.757780 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:13:17.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:17.765837 systemd[1]: Finished ignition-setup.service. Oct 2 19:13:17.766551 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:13:18.009379 systemd[1]: Finished afterburn-network-kargs.service. Oct 2 19:13:18.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:18.010110 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 19:13:18.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:18.054000 audit: BPF prog-id=9 op=LOAD Oct 2 19:13:18.055463 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:13:18.056288 systemd[1]: Starting systemd-networkd.service... 
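Between disk-uuid and the /usr mount, verity-setup.service opens the dm-verity device behind /dev/mapper/usr using the root hash passed as verity.usrhash= on the kernel command line, so a tampered /usr image fails verification rather than being mounted. The Python sketch below is only a conceptual illustration of the Merkle-tree idea behind that check; real dm-verity uses a salted, fixed-arity tree with a superblock and verifies blocks lazily on read, so this toy root would not match the logged hash.

    # Conceptual sketch of the dm-verity idea: hash fixed-size blocks, then hash
    # pairs of digests upward until a single root digest remains.
    import hashlib

    BLOCK = 4096

    def merkle_root(path: str) -> str:
        with open(path, "rb") as f:
            level = []
            while chunk := f.read(BLOCK):
                level.append(hashlib.sha256(chunk).digest())
        if not level:
            return hashlib.sha256(b"").hexdigest()
        while len(level) > 1:
            level = [hashlib.sha256(b"".join(level[i:i + 2])).digest()
                     for i in range(0, len(level), 2)]
        return level[0].hex()

    # e.g. merkle_root("/dev/mapper/usr") -- illustrative only; the real verity
    # format differs, so the result will not equal the verity.usrhash above.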
Oct 2 19:13:18.069957 systemd-networkd[726]: lo: Link UP Oct 2 19:13:18.069964 systemd-networkd[726]: lo: Gained carrier Oct 2 19:13:18.070245 systemd-networkd[726]: Enumeration completed Oct 2 19:13:18.073710 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 2 19:13:18.073829 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 2 19:13:18.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:18.070430 systemd-networkd[726]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Oct 2 19:13:18.070482 systemd[1]: Started systemd-networkd.service. Oct 2 19:13:18.070617 systemd[1]: Reached target network.target. Oct 2 19:13:18.071059 systemd[1]: Starting iscsiuio.service... Oct 2 19:13:18.073833 systemd-networkd[726]: ens192: Link UP Oct 2 19:13:18.073835 systemd-networkd[726]: ens192: Gained carrier Oct 2 19:13:18.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:18.075455 systemd[1]: Started iscsiuio.service. Oct 2 19:13:18.076013 systemd[1]: Starting iscsid.service... Oct 2 19:13:18.079220 iscsid[731]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:13:18.079220 iscsid[731]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 19:13:18.079220 iscsid[731]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:13:18.079220 iscsid[731]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:13:18.079220 iscsid[731]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:13:18.079220 iscsid[731]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:13:18.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:18.078902 systemd[1]: Started iscsid.service. Oct 2 19:13:18.079490 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:13:18.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:18.086268 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:13:18.086410 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:13:18.086499 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:13:18.086589 systemd[1]: Reached target remote-fs.target. Oct 2 19:13:18.087313 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:13:18.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:18.092601 systemd[1]: Finished dracut-pre-mount.service. 
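iscsid starts but warns that /etc/iscsi/initiatorname.iscsi is missing, which is harmless here since no software-iSCSI sessions are used. If software iSCSI were needed, the file only has to carry one InitiatorName= line in the iqn.yyyy-mm.<reversed-domain>[:identifier] form the daemon describes; the snippet below is a hypothetical example of generating such a file (the domain and identifier are placeholders, not values from this host).

    # Hypothetical example: write /etc/iscsi/initiatorname.iscsi with an IQN in
    # the format iscsid describes above. Domain and identifier are placeholders.
    from pathlib import Path
    import uuid

    iqn = f"iqn.2023-10.com.example:{uuid.uuid4().hex[:12]}"
    Path("/etc/iscsi").mkdir(parents=True, exist_ok=True)
    Path("/etc/iscsi/initiatorname.iscsi").write_text(f"InitiatorName={iqn}\n")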
Oct 2 19:13:18.250821 ignition[598]: Ignition 2.14.0 Oct 2 19:13:18.250829 ignition[598]: Stage: fetch-offline Oct 2 19:13:18.250891 ignition[598]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:13:18.250907 ignition[598]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Oct 2 19:13:18.254138 ignition[598]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 2 19:13:18.254220 ignition[598]: parsed url from cmdline: "" Oct 2 19:13:18.254222 ignition[598]: no config URL provided Oct 2 19:13:18.254225 ignition[598]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:13:18.254230 ignition[598]: no config at "/usr/lib/ignition/user.ign" Oct 2 19:13:18.254635 ignition[598]: config successfully fetched Oct 2 19:13:18.254650 ignition[598]: parsing config with SHA512: 14819b711c635864cdaf8664b785afac70dd872d857ce75e32886edfc5f6d505816ab0617861bc5342b5e92647f57bacade1fcca9b3e42d8b8d2e74f6cd877d3 Oct 2 19:13:18.271012 unknown[598]: fetched base config from "system" Oct 2 19:13:18.271019 unknown[598]: fetched user config from "vmware" Oct 2 19:13:18.271361 ignition[598]: fetch-offline: fetch-offline passed Oct 2 19:13:18.271402 ignition[598]: Ignition finished successfully Oct 2 19:13:18.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:18.272239 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:13:18.272393 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 2 19:13:18.272861 systemd[1]: Starting ignition-kargs.service... Oct 2 19:13:18.278457 ignition[746]: Ignition 2.14.0 Oct 2 19:13:18.278733 ignition[746]: Stage: kargs Oct 2 19:13:18.278919 ignition[746]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:13:18.279113 ignition[746]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Oct 2 19:13:18.280525 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 2 19:13:18.281754 ignition[746]: kargs: kargs passed Oct 2 19:13:18.281920 ignition[746]: Ignition finished successfully Oct 2 19:13:18.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:18.282873 systemd[1]: Finished ignition-kargs.service. Oct 2 19:13:18.283493 systemd[1]: Starting ignition-disks.service... Oct 2 19:13:18.288128 ignition[753]: Ignition 2.14.0 Oct 2 19:13:18.288346 ignition[753]: Stage: disks Oct 2 19:13:18.288515 ignition[753]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:13:18.288811 ignition[753]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Oct 2 19:13:18.290179 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 2 19:13:18.291808 ignition[753]: disks: disks passed Oct 2 19:13:18.291850 ignition[753]: Ignition finished successfully Oct 2 19:13:18.292456 systemd[1]: Finished ignition-disks.service. 
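Each Ignition stage above logs the SHA512 of every config it parses, both the built-in base.ign and the user config handed over by the VMware platform, which makes it possible to confirm later exactly which config a stage consumed. A small Python sketch of that check follows; the file path is a placeholder, and the expected value is the base.ign digest from the log:

    # Sketch: recompute a config's SHA512 and compare it with the digest an
    # Ignition stage logged. The path is a placeholder for a saved copy.
    import hashlib

    expected = ("bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff"
                "0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed")

    with open("base.ign", "rb") as f:          # placeholder path
        actual = hashlib.sha512(f.read()).hexdigest()
    print("match" if actual == expected else "mismatch")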
Oct 2 19:13:18.292614 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:13:18.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:18.292727 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:13:18.292888 systemd[1]: Reached target local-fs.target. Oct 2 19:13:18.293046 systemd[1]: Reached target sysinit.target. Oct 2 19:13:18.293202 systemd[1]: Reached target basic.target. Oct 2 19:13:18.293835 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:13:18.334280 systemd-fsck[761]: ROOT: clean, 603/1628000 files, 124049/1617920 blocks Oct 2 19:13:18.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:18.335072 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:13:18.335642 systemd[1]: Mounting sysroot.mount... Oct 2 19:13:18.345095 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:13:18.345366 systemd[1]: Mounted sysroot.mount. Oct 2 19:13:18.345502 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:13:18.346434 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:13:18.346771 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:13:18.346791 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:13:18.346804 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:13:18.348221 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:13:18.348793 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:13:18.351612 initrd-setup-root[771]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:13:18.355135 initrd-setup-root[779]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:13:18.356955 initrd-setup-root[787]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:13:18.359026 initrd-setup-root[795]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:13:18.425904 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:13:18.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:18.426479 systemd[1]: Starting ignition-mount.service... Oct 2 19:13:18.427106 systemd[1]: Starting sysroot-boot.service... Oct 2 19:13:18.430754 bash[812]: umount: /sysroot/usr/share/oem: not mounted. 
Oct 2 19:13:18.435940 ignition[813]: INFO : Ignition 2.14.0 Oct 2 19:13:18.435940 ignition[813]: INFO : Stage: mount Oct 2 19:13:18.436275 ignition[813]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:13:18.436275 ignition[813]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Oct 2 19:13:18.437375 ignition[813]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 2 19:13:18.439757 ignition[813]: INFO : mount: mount passed Oct 2 19:13:18.441686 ignition[813]: INFO : Ignition finished successfully Oct 2 19:13:18.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:18.442224 systemd[1]: Finished ignition-mount.service. Oct 2 19:13:18.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:18.447368 systemd[1]: Finished sysroot-boot.service. Oct 2 19:13:18.629878 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:13:18.638101 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (822) Oct 2 19:13:18.640226 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:13:18.640257 kernel: BTRFS info (device sda6): using free space tree Oct 2 19:13:18.640265 kernel: BTRFS info (device sda6): has skinny extents Oct 2 19:13:18.644099 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 2 19:13:18.646193 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:13:18.647199 systemd[1]: Starting ignition-files.service... 
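The files stage that starts here downloads several binaries (CNI plugins, crictl, kubeadm, kubelet) and, as the GET and "matches expected sum" entries below show, only writes each one once the downloaded bytes hash to the SHA512 declared in the config. The sketch below mirrors that fetch-and-verify pattern in Python, using the kubeadm URL and digest from the following entries; it is an illustration, not Ignition's implementation.

    # Illustrative fetch-and-verify: stream the download, hash it, and keep it
    # only if the SHA512 matches the expected sum from the config.
    import hashlib, os, shutil, tempfile, urllib.request

    url = "https://storage.googleapis.com/kubernetes-release/release/v1.27.2/bin/linux/amd64/kubeadm"
    expected = ("f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443"
                "ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836")

    digest = hashlib.sha512()
    with urllib.request.urlopen(url) as resp, tempfile.NamedTemporaryFile(delete=False) as tmp:
        while chunk := resp.read(1 << 20):
            digest.update(chunk)
            tmp.write(chunk)
    if digest.hexdigest() != expected:
        os.remove(tmp.name)
        raise ValueError("kubeadm checksum mismatch")
    shutil.move(tmp.name, "/opt/bin/kubeadm")   # destination taken from the config entries below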
Oct 2 19:13:18.657900 ignition[842]: INFO : Ignition 2.14.0 Oct 2 19:13:18.657900 ignition[842]: INFO : Stage: files Oct 2 19:13:18.658421 ignition[842]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:13:18.658421 ignition[842]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Oct 2 19:13:18.659269 ignition[842]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 2 19:13:18.677034 ignition[842]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:13:18.681614 ignition[842]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:13:18.681614 ignition[842]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:13:18.692140 ignition[842]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:13:18.692375 ignition[842]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:13:18.693299 unknown[842]: wrote ssh authorized keys file for user: core Oct 2 19:13:18.693534 ignition[842]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:13:18.693872 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Oct 2 19:13:18.694046 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Oct 2 19:13:19.378279 systemd-networkd[726]: ens192: Gained IPv6LL Oct 2 19:13:23.885023 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 2 19:13:23.996747 ignition[842]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Oct 2 19:13:23.997059 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Oct 2 19:13:23.997059 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Oct 2 19:13:23.997059 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Oct 2 19:13:24.235173 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 19:13:24.291267 ignition[842]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Oct 2 19:13:24.291588 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Oct 2 19:13:24.291588 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:13:24.291588 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1 Oct 2 
19:13:24.387195 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 2 19:13:25.107051 ignition[842]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836 Oct 2 19:13:25.107588 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:13:25.107808 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:13:25.108015 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1 Oct 2 19:13:25.175064 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 19:13:26.609199 ignition[842]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560 Oct 2 19:13:26.609656 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:13:26.609867 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:13:26.610178 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:13:26.610364 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:13:26.610601 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:13:26.615778 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Oct 2 19:13:26.615991 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:13:26.619069 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2218610543" Oct 2 19:13:26.619438 ignition[842]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2218610543": device or resource busy Oct 2 19:13:26.619656 ignition[842]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2218610543", trying btrfs: device or resource busy Oct 2 19:13:26.619884 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2218610543" Oct 2 19:13:26.630853 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2218610543" Oct 2 19:13:26.631113 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (844) Oct 2 19:13:26.645094 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem2218610543" Oct 2 19:13:26.645320 ignition[842]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem2218610543" Oct 2 19:13:26.645521 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Oct 2 19:13:26.645889 systemd[1]: mnt-oem2218610543.mount: Deactivated successfully. Oct 2 19:13:26.649030 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Oct 2 19:13:26.649307 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Oct 2 19:13:26.649510 ignition[842]: INFO : files: op(e): [started] processing unit "vmtoolsd.service" Oct 2 19:13:26.649665 ignition[842]: INFO : files: op(e): [finished] processing unit "vmtoolsd.service" Oct 2 19:13:26.649815 ignition[842]: INFO : files: op(f): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:13:26.649992 ignition[842]: INFO : files: op(f): op(10): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:13:26.650270 ignition[842]: INFO : files: op(f): op(10): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:13:26.650475 ignition[842]: INFO : files: op(f): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:13:26.650642 ignition[842]: INFO : files: op(11): [started] processing unit "prepare-critools.service" Oct 2 19:13:26.650821 ignition[842]: INFO : files: op(11): op(12): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:13:26.651068 ignition[842]: INFO : files: op(11): op(12): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:13:26.651276 ignition[842]: INFO : files: op(11): [finished] processing unit "prepare-critools.service" Oct 2 19:13:26.651431 ignition[842]: INFO : files: op(13): [started] processing unit "coreos-metadata.service" Oct 2 19:13:26.651593 ignition[842]: INFO : files: op(13): op(14): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:13:26.651843 ignition[842]: INFO : files: op(13): op(14): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:13:26.652049 ignition[842]: INFO : files: op(13): [finished] processing unit "coreos-metadata.service" Oct 2 19:13:26.652211 ignition[842]: INFO : files: op(15): [started] setting preset to enabled for "vmtoolsd.service" Oct 2 19:13:26.652408 ignition[842]: INFO : files: op(15): [finished] setting preset to enabled for "vmtoolsd.service" Oct 2 19:13:26.652560 ignition[842]: INFO : files: op(16): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:13:26.652734 ignition[842]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:13:26.652899 ignition[842]: INFO : files: op(17): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:13:26.653070 ignition[842]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:13:26.653237 ignition[842]: INFO : files: op(18): [started] setting preset to disabled for "coreos-metadata.service" Oct 2 19:13:26.653396 ignition[842]: INFO : files: 
op(18): op(19): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:13:26.808501 ignition[842]: INFO : files: op(18): op(19): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:13:26.808866 ignition[842]: INFO : files: op(18): [finished] setting preset to disabled for "coreos-metadata.service" Oct 2 19:13:26.809232 ignition[842]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:13:26.809605 ignition[842]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:13:26.809861 ignition[842]: INFO : files: files passed Oct 2 19:13:26.810054 ignition[842]: INFO : Ignition finished successfully Oct 2 19:13:26.811437 systemd[1]: Finished ignition-files.service. Oct 2 19:13:26.815095 kernel: kauditd_printk_skb: 24 callbacks suppressed Oct 2 19:13:26.815124 kernel: audit: type=1130 audit(1696274006.810:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.812737 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:13:26.816274 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:13:26.816892 systemd[1]: Starting ignition-quench.service... Oct 2 19:13:26.821064 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:13:26.821142 systemd[1]: Finished ignition-quench.service. Oct 2 19:13:26.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.824606 initrd-setup-root-after-ignition[868]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:13:26.827551 kernel: audit: type=1130 audit(1696274006.820:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.827571 kernel: audit: type=1131 audit(1696274006.820:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.827547 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:13:26.830857 kernel: audit: type=1130 audit(1696274006.826:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:13:26.827733 systemd[1]: Reached target ignition-complete.target. Oct 2 19:13:26.831439 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:13:26.840625 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:13:26.840848 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:13:26.841140 systemd[1]: Reached target initrd-fs.target. Oct 2 19:13:26.841351 systemd[1]: Reached target initrd.target. Oct 2 19:13:26.841577 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:13:26.842238 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:13:26.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.847561 kernel: audit: type=1130 audit(1696274006.839:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.847585 kernel: audit: type=1131 audit(1696274006.840:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.848611 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:13:26.849174 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:13:26.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.852095 kernel: audit: type=1130 audit(1696274006.847:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.856160 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:13:26.856212 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:13:26.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.856764 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:13:26.861253 kernel: audit: type=1130 audit(1696274006.855:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.861275 kernel: audit: type=1131 audit(1696274006.855:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.861344 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:13:26.861573 systemd[1]: Stopped target timers.target. Oct 2 19:13:26.861793 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Oct 2 19:13:26.861951 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:13:26.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.862278 systemd[1]: Stopped target initrd.target. Oct 2 19:13:26.864787 systemd[1]: Stopped target basic.target. Oct 2 19:13:26.864998 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:13:26.865136 kernel: audit: type=1131 audit(1696274006.861:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.865247 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:13:26.865494 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:13:26.865720 systemd[1]: Stopped target remote-fs.target. Oct 2 19:13:26.865930 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:13:26.866330 systemd[1]: Stopped target sysinit.target. Oct 2 19:13:26.866559 systemd[1]: Stopped target local-fs.target. Oct 2 19:13:26.866760 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:13:26.866962 systemd[1]: Stopped target swap.target. Oct 2 19:13:26.867173 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:13:26.867332 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:13:26.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.867600 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:13:26.867804 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:13:26.867954 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:13:26.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.868260 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:13:26.868417 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:13:26.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.868682 systemd[1]: Stopped target paths.target. Oct 2 19:13:26.868876 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:13:26.870106 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:13:26.870327 systemd[1]: Stopped target slices.target. Oct 2 19:13:26.870522 systemd[1]: Stopped target sockets.target. Oct 2 19:13:26.870739 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:13:26.870902 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:13:26.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.871193 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:13:26.871342 systemd[1]: Stopped ignition-files.service. 
Oct 2 19:13:26.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.871976 systemd[1]: Stopping ignition-mount.service... Oct 2 19:13:26.872384 systemd[1]: Stopping iscsid.service... Oct 2 19:13:26.872580 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:13:26.872735 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:13:26.873287 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:13:26.873515 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:13:26.873678 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:13:26.873966 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:13:26.874147 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:13:26.874276 iscsid[731]: iscsid shutting down. Oct 2 19:13:26.875944 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:13:26.876146 systemd[1]: Stopped iscsid.service. Oct 2 19:13:26.876414 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:13:26.876563 systemd[1]: Closed iscsid.socket. Oct 2 19:13:26.877191 systemd[1]: Stopping iscsiuio.service... Oct 2 19:13:26.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.879126 ignition[881]: INFO : Ignition 2.14.0 Oct 2 19:13:26.879126 ignition[881]: INFO : Stage: umount Oct 2 19:13:26.879126 ignition[881]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:13:26.879126 ignition[881]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Oct 2 19:13:26.878139 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:13:26.878182 systemd[1]: Stopped iscsiuio.service. Oct 2 19:13:26.878312 systemd[1]: Stopped target network.target. Oct 2 19:13:26.878404 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:13:26.880125 ignition[881]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 2 19:13:26.878421 systemd[1]: Closed iscsiuio.socket. Oct 2 19:13:26.878548 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:13:26.878668 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:13:26.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:13:26.881305 ignition[881]: INFO : umount: umount passed Oct 2 19:13:26.881305 ignition[881]: INFO : Ignition finished successfully Oct 2 19:13:26.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.881619 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:13:26.881663 systemd[1]: Stopped ignition-mount.service. Oct 2 19:13:26.881794 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:13:26.881815 systemd[1]: Stopped ignition-disks.service. Oct 2 19:13:26.881913 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:13:26.881931 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:13:26.882027 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:13:26.882046 systemd[1]: Stopped ignition-setup.service. Oct 2 19:13:26.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.887411 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:13:26.888837 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:13:26.888887 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:13:26.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.889000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:13:26.890739 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:13:26.890789 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:13:26.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.891040 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:13:26.891056 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:13:26.891694 systemd[1]: Stopping network-cleanup.service... Oct 2 19:13:26.891800 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:13:26.891826 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:13:26.891957 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Oct 2 19:13:26.891977 systemd[1]: Stopped afterburn-network-kargs.service. Oct 2 19:13:26.892094 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:13:26.892115 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:13:26.892264 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:13:26.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:13:26.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.892282 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:13:26.892419 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:13:26.893089 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:13:26.895236 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:13:26.895298 systemd[1]: Stopped network-cleanup.service. Oct 2 19:13:26.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.895000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:13:26.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.896363 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:13:26.896414 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:13:26.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.896583 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:13:26.896605 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:13:26.896853 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:13:26.896914 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:13:26.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.897262 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:13:26.897283 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:13:26.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.897524 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:13:26.897547 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:13:26.897682 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:13:26.897704 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:13:26.897870 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:13:26.897888 systemd[1]: Stopped dracut-cmdline.service. 
Oct 2 19:13:26.898031 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:13:26.898051 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:13:26.898687 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:13:26.898880 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:13:26.898911 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:13:26.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:26.902391 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:13:26.902446 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:13:26.902591 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:13:26.903039 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:13:26.909314 systemd[1]: Switching root. Oct 2 19:13:26.924104 systemd-journald[216]: Journal stopped Oct 2 19:13:28.978850 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). Oct 2 19:13:28.978872 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:13:28.978880 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:13:28.978886 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:13:28.978892 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:13:28.980371 kernel: SELinux: policy capability open_perms=1 Oct 2 19:13:28.980384 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:13:28.980391 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:13:28.980396 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:13:28.980402 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:13:28.980408 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:13:28.980414 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:13:28.980422 systemd[1]: Successfully loaded SELinux policy in 39.794ms. Oct 2 19:13:28.980431 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.148ms. Oct 2 19:13:28.980440 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:13:28.980447 systemd[1]: Detected virtualization vmware. Oct 2 19:13:28.980454 systemd[1]: Detected architecture x86-64. Oct 2 19:13:28.980461 systemd[1]: Detected first boot. Oct 2 19:13:28.980468 systemd[1]: Initializing machine ID from random generator. 
Oct 2 19:13:28.980474 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:13:28.980480 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:13:28.980487 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:13:28.980495 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:13:28.980503 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:13:28.980510 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:13:28.980516 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:13:28.980523 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:13:28.980530 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:13:28.980536 systemd[1]: Created slice system-getty.slice. Oct 2 19:13:28.980542 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:13:28.980549 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:13:28.980557 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:13:28.980564 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:13:28.980570 systemd[1]: Created slice user.slice. Oct 2 19:13:28.980577 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:13:28.980584 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:13:28.980590 systemd[1]: Set up automount boot.automount. Oct 2 19:13:28.980597 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:13:28.980603 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:13:28.980609 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:13:28.980618 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:13:28.980626 systemd[1]: Reached target integritysetup.target. Oct 2 19:13:28.980633 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:13:28.980640 systemd[1]: Reached target remote-fs.target. Oct 2 19:13:28.980646 systemd[1]: Reached target slices.target. Oct 2 19:13:28.980653 systemd[1]: Reached target swap.target. Oct 2 19:13:28.980660 systemd[1]: Reached target torcx.target. Oct 2 19:13:28.980667 systemd[1]: Reached target veritysetup.target. Oct 2 19:13:28.980675 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:13:28.980701 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:13:28.980710 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:13:28.980717 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:13:28.980724 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:13:28.980733 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:13:28.980740 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:13:28.980747 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:13:28.980755 systemd[1]: Mounting media.mount... Oct 2 19:13:28.980762 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:13:28.980769 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:13:28.980776 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:13:28.980783 systemd[1]: Mounting tmp.mount... Oct 2 19:13:28.980791 systemd[1]: Starting flatcar-tmpfiles.service... 
Oct 2 19:13:28.980798 systemd[1]: Starting ignition-delete-config.service... Oct 2 19:13:28.980805 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:13:28.980812 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:13:28.980818 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:13:28.980825 systemd[1]: Starting modprobe@drm.service... Oct 2 19:13:28.980832 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:13:28.980839 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:13:28.980846 systemd[1]: Starting modprobe@loop.service... Oct 2 19:13:28.980855 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:13:28.980862 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:13:28.980869 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:13:28.980875 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:13:28.980882 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:13:28.980889 systemd[1]: Stopped systemd-journald.service. Oct 2 19:13:28.980895 systemd[1]: Starting systemd-journald.service... Oct 2 19:13:28.980903 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:13:28.980910 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:13:28.980918 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:13:28.980925 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:13:28.980932 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:13:28.980939 systemd[1]: Stopped verity-setup.service. Oct 2 19:13:28.980946 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:13:28.980953 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:13:28.980959 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:13:28.980966 systemd[1]: Mounted media.mount. Oct 2 19:13:28.980973 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:13:28.980981 kernel: fuse: init (API version 7.34) Oct 2 19:13:28.980988 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:13:28.981804 systemd[1]: Mounted tmp.mount. Oct 2 19:13:28.981815 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:13:28.981823 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:13:28.981830 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:13:28.981837 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:13:28.981844 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:13:28.981851 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:13:28.981863 systemd-journald[1002]: Journal started Oct 2 19:13:28.981894 systemd-journald[1002]: Runtime Journal (/run/log/journal/26f688287d4f4b88b275a5a36feed2bf) is 4.8M, max 38.8M, 34.0M free. 
Oct 2 19:13:27.009000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:13:27.122000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:13:27.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:13:27.122000 audit: BPF prog-id=10 op=LOAD Oct 2 19:13:27.122000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:13:27.122000 audit: BPF prog-id=11 op=LOAD Oct 2 19:13:27.122000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:13:28.982380 systemd[1]: Finished modprobe@drm.service. Oct 2 19:13:28.881000 audit: BPF prog-id=12 op=LOAD Oct 2 19:13:28.881000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:13:28.881000 audit: BPF prog-id=13 op=LOAD Oct 2 19:13:28.881000 audit: BPF prog-id=14 op=LOAD Oct 2 19:13:28.881000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:13:28.881000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:13:28.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:28.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:28.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:28.886000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:13:28.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:28.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:28.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:28.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:28.950000 audit: BPF prog-id=15 op=LOAD Oct 2 19:13:28.950000 audit: BPF prog-id=16 op=LOAD Oct 2 19:13:28.950000 audit: BPF prog-id=17 op=LOAD Oct 2 19:13:28.950000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:13:28.950000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:13:28.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:13:28.973000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:13:28.973000 audit[1002]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fffaa135780 a2=4000 a3=7fffaa13581c items=0 ppid=1 pid=1002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:13:28.973000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:13:28.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:28.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:28.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:28.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:28.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:28.880797 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:13:27.209915 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:13:28.983786 systemd[1]: Started systemd-journald.service. Oct 2 19:13:28.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:28.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:28.883405 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 2 19:13:27.210293 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:27Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:13:28.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:28.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:13:28.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:28.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:28.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:27.210305 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:27Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:13:28.984227 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:13:28.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:27.210325 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:27Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:13:28.984295 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:13:27.210331 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:27Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:13:28.984492 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:13:27.210350 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:27Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:13:28.984555 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:13:27.210357 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:27Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:13:28.984846 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:13:27.210488 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:27Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:13:28.987143 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:13:28.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:13:28.987288 jq[981]: true Oct 2 19:13:27.210511 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:27Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:13:27.210519 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:27Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:13:27.211008 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:27Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:13:28.987462 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:13:27.211027 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:27Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:13:28.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:27.211038 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:27Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:13:27.211050 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:27Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:13:27.211060 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:27Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:13:27.211067 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:27Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:13:28.696306 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:28Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:13:28.696462 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:28Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:13:28.696528 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:28Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:13:28.696625 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:28Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker 
path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:13:28.988047 jq[1011]: true Oct 2 19:13:28.696655 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:28Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:13:28.696695 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2023-10-02T19:13:28Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:13:28.988517 systemd[1]: Reached target network-pre.target. Oct 2 19:13:28.989535 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:13:28.992013 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:13:28.992320 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:13:28.994182 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:13:28.994899 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:13:28.995015 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:13:28.995648 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:13:28.996444 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:13:28.998036 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:13:28.998486 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:13:29.001434 kernel: loop: module loaded Oct 2 19:13:29.001696 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:13:29.001773 systemd[1]: Finished modprobe@loop.service. Oct 2 19:13:29.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:29.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:29.001957 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:13:29.014272 systemd-journald[1002]: Time spent on flushing to /var/log/journal/26f688287d4f4b88b275a5a36feed2bf is 58.130ms for 2000 entries. Oct 2 19:13:29.014272 systemd-journald[1002]: System Journal (/var/log/journal/26f688287d4f4b88b275a5a36feed2bf) is 8.0M, max 584.8M, 576.8M free. Oct 2 19:13:29.091839 systemd-journald[1002]: Received client request to flush runtime journal. Oct 2 19:13:29.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:29.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:29.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:13:29.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:29.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:29.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:29.017283 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:13:29.018227 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:13:29.031153 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:13:29.093781 udevadm[1043]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 2 19:13:29.031313 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:13:29.041959 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:13:29.067598 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:13:29.082771 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:13:29.083679 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:13:29.092756 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:13:29.153320 ignition[1012]: Ignition 2.14.0 Oct 2 19:13:29.153540 ignition[1012]: deleting config from guestinfo properties Oct 2 19:13:29.156250 ignition[1012]: Successfully deleted config Oct 2 19:13:29.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:29.156891 systemd[1]: Finished ignition-delete-config.service. Oct 2 19:13:29.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:29.461000 audit: BPF prog-id=18 op=LOAD Oct 2 19:13:29.461000 audit: BPF prog-id=19 op=LOAD Oct 2 19:13:29.461000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:13:29.461000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:13:29.461588 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:13:29.462912 systemd[1]: Starting systemd-udevd.service... Oct 2 19:13:29.475542 systemd-udevd[1046]: Using default interface naming scheme 'v252'. Oct 2 19:13:29.495887 systemd[1]: Started systemd-udevd.service. Oct 2 19:13:29.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:29.495000 audit: BPF prog-id=20 op=LOAD Oct 2 19:13:29.497107 systemd[1]: Starting systemd-networkd.service... Oct 2 19:13:29.502000 audit: BPF prog-id=21 op=LOAD Oct 2 19:13:29.502000 audit: BPF prog-id=22 op=LOAD Oct 2 19:13:29.502000 audit: BPF prog-id=23 op=LOAD Oct 2 19:13:29.503682 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:13:29.526655 systemd[1]: Started systemd-userdbd.service. 
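[Note] The ignition-delete-config entries a little earlier report that Ignition removed its configuration from the VMware guestinfo properties after provisioning. A rough way to confirm from inside the guest that nothing was left behind is to query the properties again; the sketch below assumes the commonly used property names (guestinfo.ignition.config.data and its .encoding companion) and that vmware-rpctool from open-vm-tools is available and returns a non-zero exit status when a key is unset.

#!/usr/bin/env python3
"""Check whether the Ignition guestinfo properties are still set (illustrative only)."""
import subprocess

KEYS = (
    "guestinfo.ignition.config.data",
    "guestinfo.ignition.config.data.encoding",
)

for key in KEYS:
    proc = subprocess.run(["vmware-rpctool", f"info-get {key}"],
                          capture_output=True, text=True)
    if proc.returncode == 0 and proc.stdout.strip():
        print(f"{key}: still present ({len(proc.stdout.strip())} bytes)")
    else:
        print(f"{key}: not set (expected after ignition-delete-config)")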
Oct 2 19:13:29.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:29.526917 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:13:29.558103 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 2 19:13:29.565106 kernel: ACPI: button: Power Button [PWRF] Oct 2 19:13:29.583401 systemd-networkd[1048]: lo: Link UP Oct 2 19:13:29.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:29.583406 systemd-networkd[1048]: lo: Gained carrier Oct 2 19:13:29.583662 systemd-networkd[1048]: Enumeration completed Oct 2 19:13:29.583716 systemd[1]: Started systemd-networkd.service. Oct 2 19:13:29.584012 systemd-networkd[1048]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Oct 2 19:13:29.586793 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 2 19:13:29.586907 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 2 19:13:29.586986 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Oct 2 19:13:29.588006 systemd-networkd[1048]: ens192: Link UP Oct 2 19:13:29.588091 systemd-networkd[1048]: ens192: Gained carrier Oct 2 19:13:29.615000 audit[1047]: AVC avc: denied { confidentiality } for pid=1047 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:13:29.625089 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 Oct 2 19:13:29.615000 audit[1047]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=563f63506da0 a1=32194 a2=7f258cf01bc5 a3=5 items=106 ppid=1046 pid=1047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:13:29.615000 audit: CWD cwd="/" Oct 2 19:13:29.615000 audit: PATH item=0 name=(null) inode=17099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=1 name=(null) inode=17100 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=2 name=(null) inode=17099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=3 name=(null) inode=17101 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=4 name=(null) inode=17099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=5 name=(null) inode=17102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 
audit: PATH item=6 name=(null) inode=17102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=7 name=(null) inode=17103 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=8 name=(null) inode=17102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=9 name=(null) inode=17104 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=10 name=(null) inode=17102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=11 name=(null) inode=17105 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=12 name=(null) inode=17102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=13 name=(null) inode=17106 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=14 name=(null) inode=17102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=15 name=(null) inode=17107 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=16 name=(null) inode=17099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=17 name=(null) inode=17108 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=18 name=(null) inode=17108 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=19 name=(null) inode=17109 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=20 name=(null) inode=17108 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=21 name=(null) inode=17110 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=22 name=(null) inode=17108 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=23 name=(null) inode=17111 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=24 name=(null) inode=17108 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=25 name=(null) inode=17112 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=26 name=(null) inode=17108 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=27 name=(null) inode=17113 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=28 name=(null) inode=17099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=29 name=(null) inode=17114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=30 name=(null) inode=17114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=31 name=(null) inode=17115 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=32 name=(null) inode=17114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=33 name=(null) inode=17116 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=34 name=(null) inode=17114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=35 name=(null) inode=17117 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=36 name=(null) inode=17114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=37 name=(null) inode=17118 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=38 name=(null) inode=17114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=39 name=(null) inode=17119 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=40 name=(null) inode=17099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=41 name=(null) inode=17120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=42 name=(null) inode=17120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=43 name=(null) inode=17121 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=44 name=(null) inode=17120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=45 name=(null) inode=17122 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=46 name=(null) inode=17120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=47 name=(null) inode=17123 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=48 name=(null) inode=17120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=49 name=(null) inode=17124 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=50 name=(null) inode=17120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=51 name=(null) inode=17125 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=52 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=53 name=(null) inode=17126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=54 name=(null) inode=17126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=55 name=(null) inode=17127 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=56 name=(null) inode=17126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=57 name=(null) inode=17128 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=58 name=(null) inode=17126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=59 name=(null) inode=17129 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=60 name=(null) inode=17129 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=61 name=(null) inode=17130 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=62 name=(null) inode=17129 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=63 name=(null) inode=17131 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=64 name=(null) inode=17129 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=65 name=(null) inode=17132 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=66 name=(null) inode=17129 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=67 name=(null) inode=17133 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=68 name=(null) inode=17129 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=69 name=(null) inode=17134 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=70 name=(null) inode=17126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=71 name=(null) inode=17135 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=72 name=(null) inode=17135 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=73 name=(null) inode=17136 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=74 name=(null) inode=17135 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=75 name=(null) inode=17137 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=76 name=(null) inode=17135 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=77 name=(null) inode=17138 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=78 name=(null) inode=17135 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=79 name=(null) inode=17139 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=80 name=(null) inode=17135 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=81 name=(null) inode=17140 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=82 name=(null) inode=17126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=83 name=(null) inode=17141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=84 name=(null) inode=17141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=85 name=(null) inode=17142 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=86 name=(null) inode=17141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=87 name=(null) inode=17143 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH 
item=88 name=(null) inode=17141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=89 name=(null) inode=17144 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=90 name=(null) inode=17141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=91 name=(null) inode=17145 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=92 name=(null) inode=17141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=93 name=(null) inode=17146 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=94 name=(null) inode=17126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=95 name=(null) inode=17147 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=96 name=(null) inode=17147 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=97 name=(null) inode=17148 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=98 name=(null) inode=17147 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=99 name=(null) inode=17149 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=100 name=(null) inode=17147 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=101 name=(null) inode=17150 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=102 name=(null) inode=17147 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=103 name=(null) inode=17151 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=104 name=(null) inode=17147 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PATH item=105 name=(null) inode=17152 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:13:29.615000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 19:13:29.636145 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Oct 2 19:13:29.638095 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Oct 2 19:13:29.646522 kernel: Guest personality initialized and is active Oct 2 19:13:29.646569 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 2 19:13:29.646583 kernel: Initialized host personality Oct 2 19:13:29.655147 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1054) Oct 2 19:13:29.671094 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Oct 2 19:13:29.672616 (udev-worker)[1058]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Oct 2 19:13:29.674086 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 19:13:29.678483 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:13:29.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:29.702287 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:13:29.703208 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:13:29.721183 lvm[1079]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:13:29.744602 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:13:29.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:29.744782 systemd[1]: Reached target cryptsetup.target. Oct 2 19:13:29.745696 systemd[1]: Starting lvm2-activation.service... Oct 2 19:13:29.748140 lvm[1080]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:13:29.770602 systemd[1]: Finished lvm2-activation.service. Oct 2 19:13:29.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:29.770777 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:13:29.770880 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:13:29.770896 systemd[1]: Reached target local-fs.target. Oct 2 19:13:29.770989 systemd[1]: Reached target machines.target. Oct 2 19:13:29.771918 systemd[1]: Starting ldconfig.service... Oct 2 19:13:29.774064 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:13:29.774120 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:13:29.774816 systemd[1]: Starting systemd-boot-update.service... 
Oct 2 19:13:29.775447 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:13:29.776727 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:13:29.776889 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:13:29.776918 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:13:29.779644 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:13:29.781538 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1082 (bootctl) Oct 2 19:13:29.782110 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:13:29.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:29.791805 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:13:29.894485 systemd-tmpfiles[1085]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:13:29.979491 systemd-tmpfiles[1085]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:13:30.059740 systemd-tmpfiles[1085]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:13:30.061496 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:13:30.062277 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:13:30.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:30.100717 systemd-fsck[1090]: fsck.fat 4.2 (2021-01-31) Oct 2 19:13:30.100717 systemd-fsck[1090]: /dev/sda1: 789 files, 115069/258078 clusters Oct 2 19:13:30.102343 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:13:30.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:30.103431 systemd[1]: Mounting boot.mount... Oct 2 19:13:30.127391 systemd[1]: Mounted boot.mount. Oct 2 19:13:30.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:30.153965 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:13:30.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:30.210511 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:13:30.212000 audit: BPF prog-id=24 op=LOAD Oct 2 19:13:30.211633 systemd[1]: Starting audit-rules.service... Oct 2 19:13:30.212419 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:13:30.213183 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:13:30.215942 systemd[1]: Starting systemd-resolved.service... 
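[Note] The "Duplicate line for path ..." warnings above come from systemd-tmpfiles finding the same path declared by more than one tmpfiles.d fragment; the later declaration is ignored. The sketch below does a rough scan for such duplicates. It is a simplified parser (it ignores quoting, %-specifiers, and the masking of same-named files across directories), so use it only as an illustration of where the warnings originate.

#!/usr/bin/env python3
"""Report paths declared by more than one tmpfiles.d line (simplified check)."""
import collections
import glob
import os

DIRS = ("/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d")  # usual search path

seen = collections.defaultdict(list)
for d in DIRS:
    for conf in sorted(glob.glob(os.path.join(d, "*.conf"))):
        with open(conf) as f:
            for lineno, line in enumerate(f, 1):
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) >= 2:
                    seen[fields[1]].append(f"{conf}:{lineno}")

for path, where in sorted(seen.items()):
    if len(where) > 1:
        print(f"{path}: declared at {', '.join(where)}")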
Oct 2 19:13:30.217000 audit: BPF prog-id=25 op=LOAD Oct 2 19:13:30.219509 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:13:30.222042 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:13:30.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:30.231629 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:13:30.231801 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:13:30.232000 audit[1104]: SYSTEM_BOOT pid=1104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:13:30.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:30.234415 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:13:30.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:13:30.245044 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:13:30.269000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:13:30.269000 audit[1113]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffcc4f7c20 a2=420 a3=0 items=0 ppid=1093 pid=1113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:13:30.269000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:13:30.270891 augenrules[1113]: No rules Oct 2 19:13:30.271346 systemd[1]: Finished audit-rules.service. Oct 2 19:13:30.276527 systemd-resolved[1097]: Positive Trust Anchors: Oct 2 19:13:30.276735 systemd-resolved[1097]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:13:30.276799 systemd-resolved[1097]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:13:30.276920 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:13:30.277098 systemd[1]: Reached target time-set.target. Oct 2 19:13:30.299938 systemd-resolved[1097]: Defaulting to hostname 'linux'. Oct 2 19:13:30.301021 systemd[1]: Started systemd-resolved.service. Oct 2 19:13:30.301173 systemd[1]: Reached target network.target. Oct 2 19:13:30.301277 systemd[1]: Reached target nss-lookup.target. 
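[Note] The audit PROCTITLE field in the audit-rules record above is the command line of the audited process, hex-encoded with NUL bytes between arguments. Decoding the value logged above shows the auditctl invocation that augenrules used to load (in this case, an empty set of) rules. A minimal decoder, using the exact hex string from the log:

#!/usr/bin/env python3
"""Decode the hex-encoded PROCTITLE field from the audit record above."""
proctitle_hex = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
argv = [part.decode() for part in bytes.fromhex(proctitle_hex).split(b"\x00")]
print(argv)
# -> ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']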
Oct 2 19:13:30.403903 ldconfig[1081]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:13:30.426314 systemd[1]: Finished ldconfig.service. Oct 2 19:14:04.571595 systemd-timesyncd[1101]: Contacted time server 129.250.35.251:123 (0.flatcar.pool.ntp.org). Oct 2 19:14:04.571768 systemd-timesyncd[1101]: Initial clock synchronization to Mon 2023-10-02 19:14:04.571545 UTC. Oct 2 19:14:04.571854 systemd-resolved[1097]: Clock change detected. Flushing caches. Oct 2 19:14:04.571983 systemd[1]: Starting systemd-update-done.service... Oct 2 19:14:04.582579 systemd[1]: Finished systemd-update-done.service. Oct 2 19:14:04.582751 systemd[1]: Reached target sysinit.target. Oct 2 19:14:04.582885 systemd[1]: Started motdgen.path. Oct 2 19:14:04.582985 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:14:04.583159 systemd[1]: Started logrotate.timer. Oct 2 19:14:04.583294 systemd[1]: Started mdadm.timer. Oct 2 19:14:04.583374 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:14:04.583464 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:14:04.583483 systemd[1]: Reached target paths.target. Oct 2 19:14:04.583563 systemd[1]: Reached target timers.target. Oct 2 19:14:04.583799 systemd[1]: Listening on dbus.socket. Oct 2 19:14:04.584544 systemd[1]: Starting docker.socket... Oct 2 19:14:04.593497 systemd[1]: Listening on sshd.socket. Oct 2 19:14:04.593646 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:14:04.593927 systemd[1]: Listening on docker.socket. Oct 2 19:14:04.594054 systemd[1]: Reached target sockets.target. Oct 2 19:14:04.594142 systemd[1]: Reached target basic.target. Oct 2 19:14:04.594253 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:14:04.594270 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:14:04.594907 systemd[1]: Starting containerd.service... Oct 2 19:14:04.595596 systemd[1]: Starting dbus.service... Oct 2 19:14:04.596284 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:14:04.597389 systemd[1]: Starting extend-filesystems.service... Oct 2 19:14:04.597514 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:14:04.598309 jq[1124]: false Oct 2 19:14:04.598387 systemd[1]: Starting motdgen.service... Oct 2 19:14:04.599821 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:14:04.601400 systemd[1]: Starting prepare-critools.service... Oct 2 19:14:04.602433 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:14:04.605761 systemd[1]: Starting sshd-keygen.service... Oct 2 19:14:04.607437 systemd[1]: Starting systemd-logind.service... Oct 2 19:14:04.607546 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:14:04.607575 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Oct 2 19:14:04.615294 jq[1136]: true Oct 2 19:14:04.607999 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:14:04.608349 systemd[1]: Starting update-engine.service... Oct 2 19:14:04.609174 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:14:04.610927 systemd[1]: Starting vmtoolsd.service... Oct 2 19:14:04.613199 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:14:04.613304 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:14:04.614113 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:14:04.614204 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:14:04.627976 jq[1143]: true Oct 2 19:14:04.633664 tar[1141]: ./ Oct 2 19:14:04.633664 tar[1141]: ./loopback Oct 2 19:14:04.630296 systemd[1]: Started vmtoolsd.service. Oct 2 19:14:04.638729 tar[1142]: crictl Oct 2 19:14:04.645957 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:14:04.646053 systemd[1]: Finished motdgen.service. Oct 2 19:14:04.649864 extend-filesystems[1125]: Found sda Oct 2 19:14:04.651777 extend-filesystems[1125]: Found sda1 Oct 2 19:14:04.652695 extend-filesystems[1125]: Found sda2 Oct 2 19:14:04.652856 extend-filesystems[1125]: Found sda3 Oct 2 19:14:04.652999 extend-filesystems[1125]: Found usr Oct 2 19:14:04.653138 extend-filesystems[1125]: Found sda4 Oct 2 19:14:04.653293 extend-filesystems[1125]: Found sda6 Oct 2 19:14:04.653428 extend-filesystems[1125]: Found sda7 Oct 2 19:14:04.653567 extend-filesystems[1125]: Found sda9 Oct 2 19:14:04.653701 extend-filesystems[1125]: Checking size of /dev/sda9 Oct 2 19:14:04.670820 tar[1141]: ./bandwidth Oct 2 19:14:04.686090 bash[1173]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:14:04.686488 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:14:04.697762 systemd-logind[1134]: Watching system buttons on /dev/input/event1 (Power Button) Oct 2 19:14:04.697776 systemd-logind[1134]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 2 19:14:04.697879 systemd-logind[1134]: New seat seat0. Oct 2 19:14:04.704031 kernel: NET: Registered PF_VSOCK protocol family Oct 2 19:14:04.704504 dbus-daemon[1123]: [system] SELinux support is enabled Oct 2 19:14:04.704586 systemd[1]: Started dbus.service. Oct 2 19:14:04.705793 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:14:04.705808 systemd[1]: Reached target system-config.target. Oct 2 19:14:04.705921 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:14:04.705931 systemd[1]: Reached target user-config.target. Oct 2 19:14:04.710253 systemd[1]: Started systemd-logind.service. Oct 2 19:14:04.714894 env[1145]: time="2023-10-02T19:14:04.714855003Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:14:04.715175 extend-filesystems[1125]: Old size kept for /dev/sda9 Oct 2 19:14:04.715960 update_engine[1135]: I1002 19:14:04.715462 1135 main.cc:92] Flatcar Update Engine starting Oct 2 19:14:04.716399 extend-filesystems[1125]: Found sr0 Oct 2 19:14:04.717757 systemd[1]: Started update-engine.service. 
Oct 2 19:14:04.718121 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:14:04.718250 systemd[1]: Finished extend-filesystems.service. Oct 2 19:14:04.719806 systemd[1]: Started locksmithd.service. Oct 2 19:14:04.720536 update_engine[1135]: I1002 19:14:04.720518 1135 update_check_scheduler.cc:74] Next update check in 2m47s Oct 2 19:14:04.761551 tar[1141]: ./ptp Oct 2 19:14:04.772177 env[1145]: time="2023-10-02T19:14:04.771825129Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:14:04.777987 env[1145]: time="2023-10-02T19:14:04.777748507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:14:04.785202 env[1145]: time="2023-10-02T19:14:04.783836737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:14:04.785202 env[1145]: time="2023-10-02T19:14:04.784080152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:14:04.785202 env[1145]: time="2023-10-02T19:14:04.784626695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:14:04.785202 env[1145]: time="2023-10-02T19:14:04.784639315Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:14:04.785202 env[1145]: time="2023-10-02T19:14:04.784648464Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:14:04.785202 env[1145]: time="2023-10-02T19:14:04.784654163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:14:04.785202 env[1145]: time="2023-10-02T19:14:04.784709022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:14:04.787185 env[1145]: time="2023-10-02T19:14:04.787164100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:14:04.787312 env[1145]: time="2023-10-02T19:14:04.787285353Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:14:04.787312 env[1145]: time="2023-10-02T19:14:04.787299219Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:14:04.787368 env[1145]: time="2023-10-02T19:14:04.787347246Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:14:04.787368 env[1145]: time="2023-10-02T19:14:04.787356469Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:14:04.794155 env[1145]: time="2023-10-02T19:14:04.794125469Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Oct 2 19:14:04.794155 env[1145]: time="2023-10-02T19:14:04.794153971Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:14:04.794155 env[1145]: time="2023-10-02T19:14:04.794162374Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:14:04.794276 env[1145]: time="2023-10-02T19:14:04.794189821Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:14:04.794276 env[1145]: time="2023-10-02T19:14:04.794204724Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:14:04.794276 env[1145]: time="2023-10-02T19:14:04.794215762Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:14:04.794276 env[1145]: time="2023-10-02T19:14:04.794225049Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:14:04.794276 env[1145]: time="2023-10-02T19:14:04.794232495Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:14:04.794276 env[1145]: time="2023-10-02T19:14:04.794240475Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:14:04.794276 env[1145]: time="2023-10-02T19:14:04.794247410Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:14:04.794276 env[1145]: time="2023-10-02T19:14:04.794254753Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:14:04.794276 env[1145]: time="2023-10-02T19:14:04.794262079Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:14:04.794412 env[1145]: time="2023-10-02T19:14:04.794338584Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:14:04.794412 env[1145]: time="2023-10-02T19:14:04.794386584Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:14:04.794550 env[1145]: time="2023-10-02T19:14:04.794537037Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:14:04.794584 env[1145]: time="2023-10-02T19:14:04.794559770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:14:04.794584 env[1145]: time="2023-10-02T19:14:04.794570429Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:14:04.794636 env[1145]: time="2023-10-02T19:14:04.794603506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:14:04.794636 env[1145]: time="2023-10-02T19:14:04.794612566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:14:04.794636 env[1145]: time="2023-10-02T19:14:04.794620124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:14:04.794636 env[1145]: time="2023-10-02T19:14:04.794626393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Oct 2 19:14:04.794636 env[1145]: time="2023-10-02T19:14:04.794633535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:14:04.794735 env[1145]: time="2023-10-02T19:14:04.794640243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:14:04.794735 env[1145]: time="2023-10-02T19:14:04.794646620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:14:04.794735 env[1145]: time="2023-10-02T19:14:04.794653151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:14:04.794735 env[1145]: time="2023-10-02T19:14:04.794660978Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:14:04.794817 env[1145]: time="2023-10-02T19:14:04.794747931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:14:04.794817 env[1145]: time="2023-10-02T19:14:04.794757528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:14:04.794817 env[1145]: time="2023-10-02T19:14:04.794764421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:14:04.794817 env[1145]: time="2023-10-02T19:14:04.794770997Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:14:04.794817 env[1145]: time="2023-10-02T19:14:04.794780060Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:14:04.794817 env[1145]: time="2023-10-02T19:14:04.794786308Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:14:04.794817 env[1145]: time="2023-10-02T19:14:04.794802245Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:14:04.794931 env[1145]: time="2023-10-02T19:14:04.794827124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 2 19:14:04.794999 env[1145]: time="2023-10-02T19:14:04.794959737Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:14:04.799461 env[1145]: time="2023-10-02T19:14:04.795001705Z" level=info msg="Connect containerd service" Oct 2 19:14:04.799461 env[1145]: time="2023-10-02T19:14:04.795027416Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:14:04.799461 env[1145]: time="2023-10-02T19:14:04.795372936Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:14:04.799461 env[1145]: time="2023-10-02T19:14:04.795532530Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:14:04.799461 env[1145]: time="2023-10-02T19:14:04.795559096Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:14:04.795630 systemd[1]: Started containerd.service. 
Oct 2 19:14:04.799872 env[1145]: time="2023-10-02T19:14:04.799856207Z" level=info msg="containerd successfully booted in 0.086673s" Oct 2 19:14:04.800737 env[1145]: time="2023-10-02T19:14:04.800222123Z" level=info msg="Start subscribing containerd event" Oct 2 19:14:04.800737 env[1145]: time="2023-10-02T19:14:04.800268557Z" level=info msg="Start recovering state" Oct 2 19:14:04.800737 env[1145]: time="2023-10-02T19:14:04.800309386Z" level=info msg="Start event monitor" Oct 2 19:14:04.800737 env[1145]: time="2023-10-02T19:14:04.800324407Z" level=info msg="Start snapshots syncer" Oct 2 19:14:04.800737 env[1145]: time="2023-10-02T19:14:04.800331243Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:14:04.800737 env[1145]: time="2023-10-02T19:14:04.800337894Z" level=info msg="Start streaming server" Oct 2 19:14:04.816484 tar[1141]: ./vlan Oct 2 19:14:04.863682 tar[1141]: ./host-device Oct 2 19:14:04.887940 tar[1141]: ./tuning Oct 2 19:14:04.909543 tar[1141]: ./vrf Oct 2 19:14:04.939368 tar[1141]: ./sbr Oct 2 19:14:04.986413 tar[1141]: ./tap Oct 2 19:14:05.036272 tar[1141]: ./dhcp Oct 2 19:14:05.155700 systemd[1]: Finished prepare-critools.service. Oct 2 19:14:05.162755 tar[1141]: ./static Oct 2 19:14:05.170999 systemd-networkd[1048]: ens192: Gained IPv6LL Oct 2 19:14:05.182214 tar[1141]: ./firewall Oct 2 19:14:05.210603 tar[1141]: ./macvlan Oct 2 19:14:05.236342 tar[1141]: ./dummy Oct 2 19:14:05.262110 tar[1141]: ./bridge Oct 2 19:14:05.292795 tar[1141]: ./ipvlan Oct 2 19:14:05.319891 tar[1141]: ./portmap Oct 2 19:14:05.344266 tar[1141]: ./host-local Oct 2 19:14:05.373122 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:14:05.410741 sshd_keygen[1156]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:14:05.425774 locksmithd[1185]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:14:05.426570 systemd[1]: Finished sshd-keygen.service. Oct 2 19:14:05.427729 systemd[1]: Starting issuegen.service... Oct 2 19:14:05.430859 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:14:05.430952 systemd[1]: Finished issuegen.service. Oct 2 19:14:05.431947 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:14:05.435659 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:14:05.436605 systemd[1]: Started getty@tty1.service. Oct 2 19:14:05.437395 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 19:14:05.437590 systemd[1]: Reached target getty.target. Oct 2 19:14:05.437730 systemd[1]: Reached target multi-user.target. Oct 2 19:14:05.438666 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:14:05.443372 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:14:05.443466 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:14:05.443638 systemd[1]: Startup finished in 926ms (kernel) + 12.415s (initrd) + 4.339s (userspace) = 17.680s. Oct 2 19:14:05.466728 login[1254]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Oct 2 19:14:05.468157 login[1255]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 2 19:14:05.475075 systemd[1]: Created slice user-500.slice. Oct 2 19:14:05.475883 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:14:05.478655 systemd-logind[1134]: New session 1 of user core. Oct 2 19:14:05.481394 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:14:05.482362 systemd[1]: Starting user@500.service... 
Oct 2 19:14:05.484514 (systemd)[1258]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:14:05.527944 systemd[1258]: Queued start job for default target default.target. Oct 2 19:14:05.528438 systemd[1258]: Reached target paths.target. Oct 2 19:14:05.528515 systemd[1258]: Reached target sockets.target. Oct 2 19:14:05.528575 systemd[1258]: Reached target timers.target. Oct 2 19:14:05.528633 systemd[1258]: Reached target basic.target. Oct 2 19:14:05.528711 systemd[1258]: Reached target default.target. Oct 2 19:14:05.528755 systemd[1]: Started user@500.service. Oct 2 19:14:05.528821 systemd[1258]: Startup finished in 40ms. Oct 2 19:14:05.529524 systemd[1]: Started session-1.scope. Oct 2 19:14:06.466978 login[1254]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 2 19:14:06.469629 systemd-logind[1134]: New session 2 of user core. Oct 2 19:14:06.470487 systemd[1]: Started session-2.scope. Oct 2 19:14:44.771509 systemd[1]: Created slice system-sshd.slice. Oct 2 19:14:44.772157 systemd[1]: Started sshd@0-139.178.70.107:22-86.109.11.97:52998.service. Oct 2 19:14:44.807691 sshd[1282]: Accepted publickey for core from 86.109.11.97 port 52998 ssh2: RSA SHA256:4HsabNeLOY7T7hq+vAGv8q6phBRuHhOefapqVnqBG5k Oct 2 19:14:44.808529 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:14:44.812406 systemd[1]: Started session-3.scope. Oct 2 19:14:44.813278 systemd-logind[1134]: New session 3 of user core. Oct 2 19:14:44.862122 systemd[1]: Started sshd@1-139.178.70.107:22-86.109.11.97:53002.service. Oct 2 19:14:44.888292 sshd[1287]: Accepted publickey for core from 86.109.11.97 port 53002 ssh2: RSA SHA256:4HsabNeLOY7T7hq+vAGv8q6phBRuHhOefapqVnqBG5k Oct 2 19:14:44.889001 sshd[1287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:14:44.891475 systemd-logind[1134]: New session 4 of user core. Oct 2 19:14:44.891872 systemd[1]: Started session-4.scope. Oct 2 19:14:44.943240 sshd[1287]: pam_unix(sshd:session): session closed for user core Oct 2 19:14:44.946150 systemd[1]: Started sshd@2-139.178.70.107:22-86.109.11.97:53010.service. Oct 2 19:14:44.948323 systemd-logind[1134]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:14:44.948462 systemd[1]: sshd@1-139.178.70.107:22-86.109.11.97:53002.service: Deactivated successfully. Oct 2 19:14:44.948920 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:14:44.950055 systemd-logind[1134]: Removed session 4. Oct 2 19:14:44.974187 sshd[1292]: Accepted publickey for core from 86.109.11.97 port 53010 ssh2: RSA SHA256:4HsabNeLOY7T7hq+vAGv8q6phBRuHhOefapqVnqBG5k Oct 2 19:14:44.975033 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:14:44.977779 systemd[1]: Started session-5.scope. Oct 2 19:14:44.977971 systemd-logind[1134]: New session 5 of user core. Oct 2 19:14:45.026053 sshd[1292]: pam_unix(sshd:session): session closed for user core Oct 2 19:14:45.028796 systemd[1]: Started sshd@3-139.178.70.107:22-86.109.11.97:53018.service. Oct 2 19:14:45.029484 systemd[1]: sshd@2-139.178.70.107:22-86.109.11.97:53010.service: Deactivated successfully. Oct 2 19:14:45.030037 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:14:45.030901 systemd-logind[1134]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:14:45.031672 systemd-logind[1134]: Removed session 5. 
Oct 2 19:14:45.058745 sshd[1298]: Accepted publickey for core from 86.109.11.97 port 53018 ssh2: RSA SHA256:4HsabNeLOY7T7hq+vAGv8q6phBRuHhOefapqVnqBG5k Oct 2 19:14:45.059918 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:14:45.063027 systemd-logind[1134]: New session 6 of user core. Oct 2 19:14:45.063571 systemd[1]: Started session-6.scope. Oct 2 19:14:45.116290 systemd[1]: Started sshd@4-139.178.70.107:22-86.109.11.97:53026.service. Oct 2 19:14:45.116489 sshd[1298]: pam_unix(sshd:session): session closed for user core Oct 2 19:14:45.117982 systemd[1]: sshd@3-139.178.70.107:22-86.109.11.97:53018.service: Deactivated successfully. Oct 2 19:14:45.118324 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:14:45.118705 systemd-logind[1134]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:14:45.119235 systemd-logind[1134]: Removed session 6. Oct 2 19:14:45.143997 sshd[1304]: Accepted publickey for core from 86.109.11.97 port 53026 ssh2: RSA SHA256:4HsabNeLOY7T7hq+vAGv8q6phBRuHhOefapqVnqBG5k Oct 2 19:14:45.144709 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:14:45.147210 systemd-logind[1134]: New session 7 of user core. Oct 2 19:14:45.147643 systemd[1]: Started session-7.scope. Oct 2 19:14:45.211680 sudo[1308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:14:45.211807 sudo[1308]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:14:45.217757 dbus-daemon[1123]: \xd0=\x89+\xc7U: received setenforce notice (enforcing=-606266880) Oct 2 19:14:45.217831 sudo[1308]: pam_unix(sudo:session): session closed for user root Oct 2 19:14:45.220107 sshd[1304]: pam_unix(sshd:session): session closed for user core Oct 2 19:14:45.222290 systemd[1]: Started sshd@5-139.178.70.107:22-86.109.11.97:53036.service. Oct 2 19:14:45.223463 systemd[1]: sshd@4-139.178.70.107:22-86.109.11.97:53026.service: Deactivated successfully. Oct 2 19:14:45.223909 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:14:45.224317 systemd-logind[1134]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:14:45.224926 systemd-logind[1134]: Removed session 7. Oct 2 19:14:45.250414 sshd[1311]: Accepted publickey for core from 86.109.11.97 port 53036 ssh2: RSA SHA256:4HsabNeLOY7T7hq+vAGv8q6phBRuHhOefapqVnqBG5k Oct 2 19:14:45.251184 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:14:45.253659 systemd-logind[1134]: New session 8 of user core. Oct 2 19:14:45.254121 systemd[1]: Started session-8.scope. Oct 2 19:14:45.302909 sudo[1316]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:14:45.303622 sudo[1316]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:14:45.305372 sudo[1316]: pam_unix(sudo:session): session closed for user root Oct 2 19:14:45.308051 sudo[1315]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:14:45.308179 sudo[1315]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:14:45.313563 systemd[1]: Stopping audit-rules.service... 
Oct 2 19:14:45.313000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:14:45.315645 kernel: kauditd_printk_skb: 225 callbacks suppressed Oct 2 19:14:45.315678 kernel: audit: type=1305 audit(1696274085.313:157): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:14:45.315786 auditctl[1319]: No rules Oct 2 19:14:45.313000 audit[1319]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdde2163e0 a2=420 a3=0 items=0 ppid=1 pid=1319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:45.315999 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:14:45.316087 systemd[1]: Stopped audit-rules.service. Oct 2 19:14:45.317459 systemd[1]: Starting audit-rules.service... Oct 2 19:14:45.320789 kernel: audit: type=1300 audit(1696274085.313:157): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdde2163e0 a2=420 a3=0 items=0 ppid=1 pid=1319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:45.313000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:14:45.321964 kernel: audit: type=1327 audit(1696274085.313:157): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:14:45.321993 kernel: audit: type=1131 audit(1696274085.315:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.330485 augenrules[1336]: No rules Oct 2 19:14:45.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.330972 systemd[1]: Finished audit-rules.service. Oct 2 19:14:45.331432 sudo[1315]: pam_unix(sudo:session): session closed for user root Oct 2 19:14:45.330000 audit[1315]: USER_END pid=1315 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.334523 systemd[1]: Started sshd@6-139.178.70.107:22-86.109.11.97:53040.service. Oct 2 19:14:45.337305 kernel: audit: type=1130 audit(1696274085.330:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.337340 kernel: audit: type=1106 audit(1696274085.330:160): pid=1315 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:45.337399 sshd[1311]: pam_unix(sshd:session): session closed for user core Oct 2 19:14:45.330000 audit[1315]: CRED_DISP pid=1315 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.340221 kernel: audit: type=1104 audit(1696274085.330:161): pid=1315 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.107:22-86.109.11.97:53040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.343173 kernel: audit: type=1130 audit(1696274085.333:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.107:22-86.109.11.97:53040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.344452 systemd[1]: sshd@5-139.178.70.107:22-86.109.11.97:53036.service: Deactivated successfully. Oct 2 19:14:45.344894 systemd[1]: session-8.scope: Deactivated successfully. Oct 2 19:14:45.345590 systemd-logind[1134]: Session 8 logged out. Waiting for processes to exit. Oct 2 19:14:45.346034 systemd-logind[1134]: Removed session 8. Oct 2 19:14:45.342000 audit[1311]: USER_END pid=1311 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:14:45.343000 audit[1311]: CRED_DISP pid=1311 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:14:45.354363 kernel: audit: type=1106 audit(1696274085.342:163): pid=1311 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:14:45.354430 kernel: audit: type=1104 audit(1696274085.343:164): pid=1311 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:14:45.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.70.107:22-86.109.11.97:53036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:45.370000 audit[1341]: USER_ACCT pid=1341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:14:45.370910 sshd[1341]: Accepted publickey for core from 86.109.11.97 port 53040 ssh2: RSA SHA256:4HsabNeLOY7T7hq+vAGv8q6phBRuHhOefapqVnqBG5k Oct 2 19:14:45.370000 audit[1341]: CRED_ACQ pid=1341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:14:45.370000 audit[1341]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8d12fce0 a2=3 a3=0 items=0 ppid=1 pid=1341 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:45.370000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:14:45.371819 sshd[1341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:14:45.374776 systemd[1]: Started session-9.scope. Oct 2 19:14:45.375157 systemd-logind[1134]: New session 9 of user core. Oct 2 19:14:45.377000 audit[1341]: USER_START pid=1341 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:14:45.378000 audit[1344]: CRED_ACQ pid=1344 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:14:45.422000 audit[1345]: USER_ACCT pid=1345 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.423000 audit[1345]: CRED_REFR pid=1345 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.423746 sudo[1345]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:14:45.423871 sudo[1345]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:14:45.424000 audit[1345]: USER_START pid=1345 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.260882 systemd[1]: Reloading. 
Oct 2 19:14:46.307537 /usr/lib/systemd/system-generators/torcx-generator[1374]: time="2023-10-02T19:14:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:14:46.307756 /usr/lib/systemd/system-generators/torcx-generator[1374]: time="2023-10-02T19:14:46Z" level=info msg="torcx already run" Oct 2 19:14:46.358378 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:14:46.358390 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:14:46.369650 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit: BPF prog-id=31 op=LOAD Oct 2 19:14:46.407000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit: BPF prog-id=32 op=LOAD Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit: BPF prog-id=33 
op=LOAD Oct 2 19:14:46.407000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:14:46.407000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.407000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit: BPF prog-id=34 op=LOAD Oct 2 19:14:46.408000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:14:46.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit: BPF prog-id=35 op=LOAD Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit: BPF prog-id=36 op=LOAD Oct 2 19:14:46.408000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:14:46.408000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.408000 audit: BPF prog-id=37 op=LOAD Oct 2 19:14:46.408000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:14:46.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.409000 audit: BPF prog-id=38 op=LOAD Oct 2 19:14:46.409000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit: BPF prog-id=39 op=LOAD Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit: BPF prog-id=40 op=LOAD Oct 2 19:14:46.411000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:14:46.411000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.411000 audit: BPF prog-id=41 op=LOAD Oct 2 19:14:46.411000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 
audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit: BPF prog-id=42 op=LOAD Oct 2 19:14:46.412000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit: BPF prog-id=43 op=LOAD Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit: BPF prog-id=44 op=LOAD Oct 2 19:14:46.412000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:14:46.412000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:14:46.412000 audit: BPF prog-id=45 op=LOAD Oct 2 19:14:46.412000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:14:46.420894 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:14:46.425082 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:14:46.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.425454 systemd[1]: Reached target network-online.target. Oct 2 19:14:46.426638 systemd[1]: Started kubelet.service. Oct 2 19:14:46.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.431323 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 2 19:14:46.433173 systemd[1]: Starting coreos-metadata.service... Oct 2 19:14:46.451854 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 2 19:14:46.451967 systemd[1]: Finished coreos-metadata.service. Oct 2 19:14:46.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.480418 kubelet[1434]: E1002 19:14:46.480382 1434 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Oct 2 19:14:46.482435 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:14:46.482518 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:14:46.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 19:14:46.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.698661 systemd[1]: Stopped kubelet.service. Oct 2 19:14:46.708539 systemd[1]: Reloading. 
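[Editor's note] The long runs of AVC records above show systemd (pid 1) being denied the perfmon (capability 38) and bpf (capability 39) capabilities in the kernel_t domain; the interleaved "BPF prog-id=… op=LOAD/UNLOAD" records show the per-unit BPF programs still being exchanged as the reload progresses. If these denials ever needed to be captured and turned into a local allow rule, a generic approach would be the audit2allow workflow sketched below; this is a sketch only, it assumes the ausearch/audit2allow/semodule toolchain is installed (a stock Flatcar image may not ship it), and the module name is illustrative, not something from this host.

    # Show the recent AVC denials recorded for the systemd process.
    ausearch -m avc -ts recent -c systemd

    # Generate a local policy module that allows exactly what was denied,
    # then load it. "local_systemd_bpf" is an illustrative module name.
    ausearch -m avc -ts recent -c systemd | audit2allow -M local_systemd_bpf
    semodule -i local_systemd_bpf.pp

Whether loosening the policy is appropriate is a separate question; as logged, the BPF program loads keep appearing after the denials, so the records are primarily audit noise here.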
Oct 2 19:14:46.758987 /usr/lib/systemd/system-generators/torcx-generator[1503]: time="2023-10-02T19:14:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:14:46.759003 /usr/lib/systemd/system-generators/torcx-generator[1503]: time="2023-10-02T19:14:46Z" level=info msg="torcx already run" Oct 2 19:14:46.822436 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:14:46.822543 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:14:46.833752 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:14:46.872000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.872000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.872000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.872000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.872000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.872000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.872000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.872000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.872000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.872000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.872000 audit: BPF prog-id=46 op=LOAD Oct 2 19:14:46.872000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:14:46.872000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.873000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.873000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.873000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.873000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.873000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.873000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.873000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.873000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.873000 audit: BPF prog-id=47 op=LOAD Oct 2 19:14:46.873000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.873000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.873000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.873000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.873000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.873000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.873000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.873000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.873000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.873000 audit: BPF prog-id=48 
op=LOAD Oct 2 19:14:46.873000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:14:46.873000 audit: BPF prog-id=33 op=UNLOAD Oct 2 19:14:46.873000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.873000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit: BPF prog-id=49 op=LOAD Oct 2 19:14:46.874000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:14:46.874000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit: BPF prog-id=50 op=LOAD Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.874000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.875000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.875000 audit: BPF prog-id=51 op=LOAD Oct 2 19:14:46.875000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:14:46.875000 audit: BPF prog-id=36 op=UNLOAD Oct 2 19:14:46.875000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.875000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.875000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.875000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.875000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.875000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.875000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.875000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.875000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.875000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.875000 audit: BPF prog-id=52 op=LOAD Oct 2 19:14:46.875000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:14:46.876000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.876000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.876000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.876000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.876000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.876000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.876000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.876000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.876000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.876000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.876000 audit: BPF prog-id=53 op=LOAD Oct 2 19:14:46.876000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:14:46.878000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.878000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.878000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.878000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.878000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.878000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.878000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.878000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.879000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.879000 audit: BPF prog-id=54 op=LOAD Oct 2 19:14:46.879000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.879000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.879000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.879000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.879000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.879000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.879000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.879000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.879000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.879000 audit: BPF prog-id=55 op=LOAD Oct 2 19:14:46.879000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:14:46.879000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:14:46.880000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.880000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.880000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.880000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.880000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.880000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.880000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.880000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.880000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.880000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.880000 audit: BPF prog-id=56 op=LOAD Oct 2 19:14:46.880000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:14:46.880000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.880000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 
audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit: BPF prog-id=57 op=LOAD Oct 2 19:14:46.881000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit: BPF prog-id=58 op=LOAD Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.881000 audit: BPF prog-id=59 op=LOAD Oct 2 19:14:46.882000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:14:46.882000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:14:46.882000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.882000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.882000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.882000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.882000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.882000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.882000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.882000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.882000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:46.882000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:14:46.882000 audit: BPF prog-id=60 op=LOAD Oct 2 19:14:46.882000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:14:46.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.892137 systemd[1]: Started kubelet.service. Oct 2 19:14:46.922592 kubelet[1563]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:14:46.922592 kubelet[1563]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 2 19:14:46.922592 kubelet[1563]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:14:46.922909 kubelet[1563]: I1002 19:14:46.922626 1563 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:14:47.155044 kubelet[1563]: I1002 19:14:47.155027 1563 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Oct 2 19:14:47.155044 kubelet[1563]: I1002 19:14:47.155044 1563 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:14:47.155192 kubelet[1563]: I1002 19:14:47.155179 1563 server.go:837] "Client rotation is on, will bootstrap in background" Oct 2 19:14:47.158148 kubelet[1563]: I1002 19:14:47.158137 1563 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:14:47.158256 kubelet[1563]: I1002 19:14:47.158165 1563 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:14:47.158447 kubelet[1563]: I1002 19:14:47.158440 1563 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:14:47.158539 kubelet[1563]: I1002 19:14:47.158529 1563 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Oct 2 19:14:47.158651 kubelet[1563]: I1002 19:14:47.158635 1563 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:14:47.158695 kubelet[1563]: I1002 19:14:47.158688 1563 container_manager_linux.go:302] "Creating device plugin manager" Oct 2 19:14:47.158803 kubelet[1563]: I1002 19:14:47.158795 1563 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:14:47.168300 kubelet[1563]: I1002 19:14:47.168284 1563 kubelet.go:405] "Attempting to sync node with API server" Oct 2 19:14:47.168425 kubelet[1563]: I1002 19:14:47.168417 1563 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:14:47.168483 kubelet[1563]: I1002 19:14:47.168475 1563 kubelet.go:309] "Adding apiserver pod source" Oct 2 19:14:47.168543 kubelet[1563]: I1002 19:14:47.168529 1563 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:14:47.168731 kubelet[1563]: E1002 19:14:47.168577 1563 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:47.168731 kubelet[1563]: E1002 19:14:47.168594 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:47.169106 kubelet[1563]: I1002 19:14:47.169088 1563 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:14:47.169302 kubelet[1563]: W1002 19:14:47.169290 1563 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
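[Editor's note] The first kubelet start above exited because /var/lib/kubelet/config.yaml was missing, and the deprecation notices after the restart point the remaining command-line flags at the same --config file. A minimal sketch of such a file, written from a shell for illustration, is below; cgroupDriver, staticPodPath and the client CA path mirror what this log reports, while the authentication/authorization stanzas are assumptions rather than values read from this host.

    # Illustrative only: create a minimal KubeletConfiguration at the path
    # the kubelet's --config flag expects.
    mkdir -p /var/lib/kubelet
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                       # matches CgroupDriver:systemd above
    staticPodPath: /etc/kubernetes/manifests    # matches "Adding static pod path" above
    authentication:
      anonymous:
        enabled: false
      webhook:
        enabled: true
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    authorization:
      mode: Webhook
    EOF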
Oct 2 19:14:47.169579 kubelet[1563]: I1002 19:14:47.169563 1563 server.go:1168] "Started kubelet" Oct 2 19:14:47.169000 audit[1563]: AVC avc: denied { mac_admin } for pid=1563 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:47.169000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:14:47.169000 audit[1563]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b652c0 a1=c00064b188 a2=c000b65260 a3=25 items=0 ppid=1 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:47.169000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:14:47.169000 audit[1563]: AVC avc: denied { mac_admin } for pid=1563 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:47.169000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:14:47.169000 audit[1563]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000de6a80 a1=c00064b1a0 a2=c000b65380 a3=25 items=0 ppid=1 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:47.169000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:14:47.170761 kubelet[1563]: I1002 19:14:47.170501 1563 kubelet.go:1355] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:14:47.170761 kubelet[1563]: I1002 19:14:47.170522 1563 kubelet.go:1359] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:14:47.170761 kubelet[1563]: I1002 19:14:47.170558 1563 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:14:47.171696 kubelet[1563]: E1002 19:14:47.171685 1563 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:14:47.171743 kubelet[1563]: E1002 19:14:47.171698 1563 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:14:47.173382 kubelet[1563]: I1002 19:14:47.173357 1563 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:14:47.173880 kubelet[1563]: I1002 19:14:47.173865 1563 server.go:461] "Adding debug handlers to kubelet server" Oct 2 19:14:47.174511 kubelet[1563]: I1002 19:14:47.174497 1563 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Oct 2 19:14:47.176230 kubelet[1563]: W1002 19:14:47.176214 1563 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.67.124.139" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:14:47.176278 kubelet[1563]: E1002 19:14:47.176235 1563 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.124.139" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:14:47.176317 kubelet[1563]: E1002 19:14:47.176263 1563 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.139.178a604b8770aad4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.139", UID:"10.67.124.139", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.139"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 169534676, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 169534676, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
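[Editor's note] The cri_stats_provider and image garbage collection errors above come from the kubelet querying the containerd overlayfs snapshotter filesystem before any stats have been gathered; as the message itself says, stats initialization may simply not have completed yet. One way to check the CRI side directly is sketched below, assuming crictl is installed; the socket path is the common containerd default, not something read from this log.

    # Ask the runtime for its image filesystem usage; a real capacity should
    # appear once the snapshotter mountpoint has been scanned.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock imagefsinfo

    # Cross-check the mountpoint named in the error above.
    df -h /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs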
Oct 2 19:14:47.177178 kubelet[1563]: W1002 19:14:47.177160 1563 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:14:47.177221 kubelet[1563]: E1002 19:14:47.177180 1563 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:14:47.177674 kubelet[1563]: E1002 19:14:47.177640 1563 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.139.178a604b87919d73", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.139", UID:"10.67.124.139", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.139"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 171693939, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 171693939, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
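[Editor's note] The reflector and rejected-event failures above are all the same condition seen through different resources: the kubelet is still reaching the API server as system:anonymous, most likely because its TLS bootstrap (the "Client rotation is on, will bootstrap in background" path) has not yet produced an authorized client certificate. On the control-plane side, the usual prerequisite is an RBAC binding that lets bootstrap tokens request and auto-approve node client certificates; the sketch below is generic, the binding names are illustrative, and a kubeadm-managed cluster normally creates equivalents already.

    # Illustrative control-plane commands, not taken from this log.
    # Allow bootstrap tokens (group system:bootstrappers) to create CSRs ...
    kubectl create clusterrolebinding kubelet-bootstrap \
      --clusterrole=system:node-bootstrapper \
      --group=system:bootstrappers

    # ... and allow those CSRs to be auto-approved for node clients.
    kubectl create clusterrolebinding node-autoapprove-bootstrap \
      --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
      --group=system:bootstrappers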
Oct 2 19:14:47.177750 kubelet[1563]: I1002 19:14:47.177695 1563 volume_manager.go:284] "Starting Kubelet Volume Manager" Oct 2 19:14:47.177792 kubelet[1563]: I1002 19:14:47.177776 1563 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Oct 2 19:14:47.177929 kubelet[1563]: E1002 19:14:47.177886 1563 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.139\" not found" Oct 2 19:14:47.178231 kubelet[1563]: W1002 19:14:47.178188 1563 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:14:47.178231 kubelet[1563]: E1002 19:14:47.178203 1563 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:14:47.178708 kubelet[1563]: E1002 19:14:47.178597 1563 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.67.124.139\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Oct 2 19:14:47.189609 kubelet[1563]: I1002 19:14:47.189590 1563 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 19:14:47.189609 kubelet[1563]: I1002 19:14:47.189601 1563 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 19:14:47.189609 kubelet[1563]: I1002 19:14:47.189609 1563 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:14:47.189892 kubelet[1563]: E1002 19:14:47.189848 1563 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.139.178a604b889c6524", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.139", UID:"10.67.124.139", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.139 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.139"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 189177636, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 189177636, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:14:47.190264 kubelet[1563]: E1002 19:14:47.190229 1563 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.139.178a604b889c732b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.139", UID:"10.67.124.139", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.139 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.139"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 189181227, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 189181227, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:14:47.190600 kubelet[1563]: E1002 19:14:47.190571 1563 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.139.178a604b889c7836", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.139", UID:"10.67.124.139", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.139 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.139"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 189182518, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 189182518, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:14:47.196765 kubelet[1563]: I1002 19:14:47.196749 1563 policy_none.go:49] "None policy: Start" Oct 2 19:14:47.197129 kubelet[1563]: I1002 19:14:47.197117 1563 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 19:14:47.197174 kubelet[1563]: I1002 19:14:47.197133 1563 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:14:47.213289 systemd[1]: Created slice kubepods.slice. Oct 2 19:14:47.216313 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:14:47.218708 systemd[1]: Created slice kubepods-besteffort.slice. 
Oct 2 19:14:47.224193 kubelet[1563]: I1002 19:14:47.224173 1563 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:14:47.224258 kubelet[1563]: I1002 19:14:47.224214 1563 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:14:47.224351 kubelet[1563]: I1002 19:14:47.224340 1563 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:14:47.223000 audit[1563]: AVC avc: denied { mac_admin } for pid=1563 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:47.223000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:14:47.223000 audit[1563]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d62930 a1=c00064bd58 a2=c000d62900 a3=25 items=0 ppid=1 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:47.223000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:14:47.225054 kubelet[1563]: E1002 19:14:47.225046 1563 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.124.139\" not found" Oct 2 19:14:47.226693 kubelet[1563]: E1002 19:14:47.226641 1563 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.139.178a604b8acc7374", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.139", UID:"10.67.124.139", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.139"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 225881460, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 225881460, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
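Each audit PROCTITLE field in this section is a hex-encoded command line with NUL-separated arguments. Decoding the kubelet entry above recovers /opt/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf (the record is truncated after "--confi"), which fits the picture here: the kubelet is still bootstrapping its client credentials, so its API calls are treated as system:anonymous until the certificate rotation logged at 19:14:48. A small decoding sketch using the value from the record above:

# Audit PROCTITLE values are hex-encoded argv strings, NUL-separated.
# This is the kubelet entry from the record above; the log truncates it,
# so the final argument comes out as a partial "--confi".
hex_proctitle = (
    "2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B7562"
    "65636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261"
    "702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F657463"
    "2F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669"
)
argv = bytes.fromhex(hex_proctitle).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# -> /opt/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
#    --kubeconfig=/etc/kubernetes/kubelet.conf --confi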
Oct 2 19:14:47.234000 audit[1578]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:47.234000 audit[1578]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc70bc9d90 a2=0 a3=7ffc70bc9d7c items=0 ppid=1563 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:47.234000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:14:47.235000 audit[1580]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:47.235000 audit[1580]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffd5dae0d10 a2=0 a3=7ffd5dae0cfc items=0 ppid=1563 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:47.235000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:14:47.236000 audit[1582]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:47.236000 audit[1582]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffbea14fe0 a2=0 a3=7fffbea14fcc items=0 ppid=1563 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:47.236000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:14:47.247000 audit[1587]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:47.247000 audit[1587]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe376d10d0 a2=0 a3=7ffe376d10bc items=0 ppid=1563 pid=1587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:47.247000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:14:47.274000 audit[1592]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:47.274000 audit[1592]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff7942a990 a2=0 a3=7fff7942a97c items=0 ppid=1563 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:47.274000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:14:47.276092 
kubelet[1563]: I1002 19:14:47.276080 1563 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Oct 2 19:14:47.275000 audit[1593]: NETFILTER_CFG table=mangle:7 family=10 entries=2 op=nft_register_chain pid=1593 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:47.276000 audit[1594]: NETFILTER_CFG table=mangle:8 family=2 entries=1 op=nft_register_chain pid=1594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:47.276000 audit[1594]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc6804b960 a2=0 a3=10e3 items=0 ppid=1563 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:47.276000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:14:47.275000 audit[1593]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff00091c20 a2=0 a3=7fff00091c0c items=0 ppid=1563 pid=1593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:47.275000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:14:47.277079 kubelet[1563]: I1002 19:14:47.277072 1563 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 19:14:47.277146 kubelet[1563]: I1002 19:14:47.277138 1563 status_manager.go:207] "Starting to sync pod status with apiserver" Oct 2 19:14:47.277243 kubelet[1563]: I1002 19:14:47.277236 1563 kubelet.go:2257] "Starting kubelet main sync loop" Oct 2 19:14:47.277314 kubelet[1563]: E1002 19:14:47.277308 1563 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:14:47.276000 audit[1595]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1595 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:47.276000 audit[1595]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fffdc2d8390 a2=0 a3=7fffdc2d837c items=0 ppid=1563 pid=1595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:47.276000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:14:47.278224 kubelet[1563]: W1002 19:14:47.278216 1563 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:14:47.278301 kubelet[1563]: E1002 19:14:47.278295 1563 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:14:47.278346 kubelet[1563]: I1002 19:14:47.278336 1563 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.139" Oct 2 19:14:47.278846 kubelet[1563]: E1002 
19:14:47.278836 1563 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.124.139" Oct 2 19:14:47.278000 audit[1597]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=1597 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:47.278000 audit[1597]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc7efa8840 a2=0 a3=7ffc7efa882c items=0 ppid=1563 pid=1597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:47.278000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:14:47.279109 kubelet[1563]: E1002 19:14:47.279075 1563 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.139.178a604b889c6524", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.139", UID:"10.67.124.139", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.139 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.139"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 189177636, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 278307769, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.139.178a604b889c6524" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:14:47.279591 kubelet[1563]: E1002 19:14:47.279562 1563 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.139.178a604b889c732b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.139", UID:"10.67.124.139", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.139 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.139"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 189181227, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 278310544, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.139.178a604b889c732b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:14:47.279000 audit[1596]: NETFILTER_CFG table=mangle:11 family=10 entries=1 op=nft_register_chain pid=1596 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:47.279000 audit[1596]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcad016810 a2=0 a3=7ffcad0167fc items=0 ppid=1563 pid=1596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:47.280083 kubelet[1563]: E1002 19:14:47.280050 1563 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.139.178a604b889c7836", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.139", UID:"10.67.124.139", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.139 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.139"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 189182518, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 278311905, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.139.178a604b889c7836" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:14:47.279000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:14:47.280000 audit[1599]: NETFILTER_CFG table=nat:12 family=10 entries=2 op=nft_register_chain pid=1599 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:47.280000 audit[1599]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffea7b32930 a2=0 a3=7ffea7b3291c items=0 ppid=1563 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:47.280000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:14:47.280000 audit[1600]: NETFILTER_CFG table=filter:13 family=10 entries=2 op=nft_register_chain pid=1600 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:47.280000 audit[1600]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff1ee9c3f0 a2=0 a3=7fff1ee9c3dc items=0 ppid=1563 pid=1600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:47.280000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:14:47.379372 kubelet[1563]: E1002 19:14:47.379353 1563 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.67.124.139\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Oct 2 19:14:47.482159 kubelet[1563]: I1002 19:14:47.480020 1563 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.139" Oct 2 19:14:47.482159 kubelet[1563]: E1002 19:14:47.481227 1563 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.139.178a604b889c6524", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.139", UID:"10.67.124.139", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.139 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.139"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 189177636, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 479984913, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.139.178a604b889c6524" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:14:47.482443 kubelet[1563]: E1002 19:14:47.482421 1563 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.124.139" Oct 2 19:14:47.482681 kubelet[1563]: E1002 19:14:47.482641 1563 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.139.178a604b889c732b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.139", UID:"10.67.124.139", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.139 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.139"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 189181227, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 479988630, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.139.178a604b889c732b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:14:47.483109 kubelet[1563]: E1002 19:14:47.483081 1563 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.139.178a604b889c7836", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.139", UID:"10.67.124.139", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.139 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.139"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 189182518, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 479998313, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.139.178a604b889c7836" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:14:47.781412 kubelet[1563]: E1002 19:14:47.781109 1563 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.67.124.139\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Oct 2 19:14:47.883565 kubelet[1563]: I1002 19:14:47.883139 1563 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.139" Oct 2 19:14:47.884050 kubelet[1563]: E1002 19:14:47.884018 1563 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.124.139" Oct 2 19:14:47.884294 kubelet[1563]: E1002 19:14:47.884238 1563 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.139.178a604b889c6524", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.139", UID:"10.67.124.139", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.139 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.139"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 189177636, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 883117894, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.139.178a604b889c6524" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:14:47.884793 kubelet[1563]: E1002 19:14:47.884759 1563 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.139.178a604b889c732b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.139", UID:"10.67.124.139", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.139 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.139"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 189181227, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 883120783, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.139.178a604b889c732b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:14:47.885281 kubelet[1563]: E1002 19:14:47.885228 1563 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.139.178a604b889c7836", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.139", UID:"10.67.124.139", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.139 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.139"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 189182518, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 14, 47, 883122005, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.139.178a604b889c7836" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
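Across this stretch the "Failed to ensure lease exists, will retry" interval doubles: 200ms at 19:14:47.178, 400ms at 19:14:47.379, 800ms at 19:14:47.781. A minimal sketch of that doubling pattern; the cap is an assumed ceiling for illustration and is not taken from this log:

# Doubling retry interval, as seen above (200ms -> 400ms -> 800ms).
# CAP_MS is an assumption; the kubelet's real upper bound is not shown here.
CAP_MS = 7_000

def next_interval(current_ms: int) -> int:
    # Double the wait, but never exceed the assumed cap.
    return min(current_ms * 2, CAP_MS)

interval = 200
for attempt in range(1, 6):
    print(f"attempt {attempt}: retrying in {interval}ms")
    interval = next_interval(interval)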
Oct 2 19:14:48.159113 kubelet[1563]: I1002 19:14:48.159078 1563 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:14:48.169444 kubelet[1563]: E1002 19:14:48.169400 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:48.523209 kubelet[1563]: E1002 19:14:48.523054 1563 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.67.124.139" not found Oct 2 19:14:48.584967 kubelet[1563]: E1002 19:14:48.584949 1563 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.67.124.139\" not found" node="10.67.124.139" Oct 2 19:14:48.684971 kubelet[1563]: I1002 19:14:48.684948 1563 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.139" Oct 2 19:14:48.687147 kubelet[1563]: I1002 19:14:48.687129 1563 kubelet_node_status.go:73] "Successfully registered node" node="10.67.124.139" Oct 2 19:14:48.794388 kubelet[1563]: I1002 19:14:48.794173 1563 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:14:48.794767 env[1145]: time="2023-10-02T19:14:48.794739520Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 19:14:48.794993 kubelet[1563]: I1002 19:14:48.794859 1563 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:14:49.096000 audit[1345]: USER_END pid=1345 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:14:49.096000 audit[1345]: CRED_DISP pid=1345 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:14:49.097419 sudo[1345]: pam_unix(sudo:session): session closed for user root Oct 2 19:14:49.098276 sshd[1341]: pam_unix(sshd:session): session closed for user core Oct 2 19:14:49.098000 audit[1341]: USER_END pid=1341 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:14:49.098000 audit[1341]: CRED_DISP pid=1341 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:14:49.100344 systemd[1]: sshd@6-139.178.70.107:22-86.109.11.97:53040.service: Deactivated successfully. Oct 2 19:14:49.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.107:22-86.109.11.97:53040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:49.100831 systemd[1]: session-9.scope: Deactivated successfully. Oct 2 19:14:49.101294 systemd-logind[1134]: Session 9 logged out. Waiting for processes to exit. Oct 2 19:14:49.101961 systemd-logind[1134]: Removed session 9. 
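With registration complete, the kubelet reports pod CIDR 192.168.1.0/24 to the runtime (containerd notes it is still waiting for a CNI config to be dropped in). A quick membership check against that CIDR is a common sanity test once pod IPs appear; the sample address below is hypothetical:

import ipaddress

# Pod CIDR reported for this node in the records above.
pod_cidr = ipaddress.ip_network("192.168.1.0/24")

# Hypothetical pod IP, used only to illustrate the check.
sample_pod_ip = ipaddress.ip_address("192.168.1.17")

print(sample_pod_ip in pod_cidr)   # True
print(pod_cidr.num_addresses)      # 256 addresses in a /24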
Oct 2 19:14:49.169895 kubelet[1563]: E1002 19:14:49.169844 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:49.169895 kubelet[1563]: I1002 19:14:49.169891 1563 apiserver.go:52] "Watching apiserver" Oct 2 19:14:49.171671 kubelet[1563]: I1002 19:14:49.171652 1563 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:14:49.171754 kubelet[1563]: I1002 19:14:49.171741 1563 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:14:49.177314 systemd[1]: Created slice kubepods-besteffort-pod6a8d76e2_f6e6_48ee_b2f4_9f7dfdc62b15.slice. Oct 2 19:14:49.179348 kubelet[1563]: I1002 19:14:49.179326 1563 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Oct 2 19:14:49.183458 systemd[1]: Created slice kubepods-burstable-podc818cc1e_986d_420e_8a41_56984e15e30f.slice. Oct 2 19:14:49.188906 kubelet[1563]: I1002 19:14:49.188885 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-cilium-run\") pod \"cilium-vcp76\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " pod="kube-system/cilium-vcp76" Oct 2 19:14:49.189056 kubelet[1563]: I1002 19:14:49.189047 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-lib-modules\") pod \"cilium-vcp76\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " pod="kube-system/cilium-vcp76" Oct 2 19:14:49.189127 kubelet[1563]: I1002 19:14:49.189118 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c818cc1e-986d-420e-8a41-56984e15e30f-clustermesh-secrets\") pod \"cilium-vcp76\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " pod="kube-system/cilium-vcp76" Oct 2 19:14:49.189189 kubelet[1563]: I1002 19:14:49.189181 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6a8d76e2-f6e6-48ee-b2f4-9f7dfdc62b15-kube-proxy\") pod \"kube-proxy-92s4f\" (UID: \"6a8d76e2-f6e6-48ee-b2f4-9f7dfdc62b15\") " pod="kube-system/kube-proxy-92s4f" Oct 2 19:14:49.189258 kubelet[1563]: I1002 19:14:49.189251 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a8d76e2-f6e6-48ee-b2f4-9f7dfdc62b15-xtables-lock\") pod \"kube-proxy-92s4f\" (UID: \"6a8d76e2-f6e6-48ee-b2f4-9f7dfdc62b15\") " pod="kube-system/kube-proxy-92s4f" Oct 2 19:14:49.189318 kubelet[1563]: I1002 19:14:49.189311 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-cilium-cgroup\") pod \"cilium-vcp76\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " pod="kube-system/cilium-vcp76" Oct 2 19:14:49.189374 kubelet[1563]: I1002 19:14:49.189367 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-etc-cni-netd\") pod \"cilium-vcp76\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " pod="kube-system/cilium-vcp76" Oct 2 19:14:49.189485 kubelet[1563]: I1002 
19:14:49.189474 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-host-proc-sys-net\") pod \"cilium-vcp76\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " pod="kube-system/cilium-vcp76" Oct 2 19:14:49.189557 kubelet[1563]: I1002 19:14:49.189548 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c818cc1e-986d-420e-8a41-56984e15e30f-hubble-tls\") pod \"cilium-vcp76\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " pod="kube-system/cilium-vcp76" Oct 2 19:14:49.189623 kubelet[1563]: I1002 19:14:49.189615 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a8d76e2-f6e6-48ee-b2f4-9f7dfdc62b15-lib-modules\") pod \"kube-proxy-92s4f\" (UID: \"6a8d76e2-f6e6-48ee-b2f4-9f7dfdc62b15\") " pod="kube-system/kube-proxy-92s4f" Oct 2 19:14:49.189684 kubelet[1563]: I1002 19:14:49.189677 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4lfq\" (UniqueName: \"kubernetes.io/projected/6a8d76e2-f6e6-48ee-b2f4-9f7dfdc62b15-kube-api-access-v4lfq\") pod \"kube-proxy-92s4f\" (UID: \"6a8d76e2-f6e6-48ee-b2f4-9f7dfdc62b15\") " pod="kube-system/kube-proxy-92s4f" Oct 2 19:14:49.189872 kubelet[1563]: I1002 19:14:49.189863 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-bpf-maps\") pod \"cilium-vcp76\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " pod="kube-system/cilium-vcp76" Oct 2 19:14:49.189930 kubelet[1563]: I1002 19:14:49.189922 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-hostproc\") pod \"cilium-vcp76\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " pod="kube-system/cilium-vcp76" Oct 2 19:14:49.189995 kubelet[1563]: I1002 19:14:49.189982 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-xtables-lock\") pod \"cilium-vcp76\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " pod="kube-system/cilium-vcp76" Oct 2 19:14:49.190154 kubelet[1563]: I1002 19:14:49.190130 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-host-proc-sys-kernel\") pod \"cilium-vcp76\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " pod="kube-system/cilium-vcp76" Oct 2 19:14:49.190231 kubelet[1563]: I1002 19:14:49.190188 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gjsm\" (UniqueName: \"kubernetes.io/projected/c818cc1e-986d-420e-8a41-56984e15e30f-kube-api-access-7gjsm\") pod \"cilium-vcp76\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " pod="kube-system/cilium-vcp76" Oct 2 19:14:49.190263 kubelet[1563]: I1002 19:14:49.190238 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-cni-path\") pod \"cilium-vcp76\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " pod="kube-system/cilium-vcp76" Oct 2 19:14:49.190263 kubelet[1563]: I1002 19:14:49.190259 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c818cc1e-986d-420e-8a41-56984e15e30f-cilium-config-path\") pod \"cilium-vcp76\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " pod="kube-system/cilium-vcp76" Oct 2 19:14:49.190345 kubelet[1563]: I1002 19:14:49.190274 1563 reconciler.go:41] "Reconciler: start to sync state" Oct 2 19:14:49.487904 env[1145]: time="2023-10-02T19:14:49.487866239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-92s4f,Uid:6a8d76e2-f6e6-48ee-b2f4-9f7dfdc62b15,Namespace:kube-system,Attempt:0,}" Oct 2 19:14:49.493094 env[1145]: time="2023-10-02T19:14:49.492663469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vcp76,Uid:c818cc1e-986d-420e-8a41-56984e15e30f,Namespace:kube-system,Attempt:0,}" Oct 2 19:14:49.764097 update_engine[1135]: I1002 19:14:49.763753 1135 update_attempter.cc:505] Updating boot flags... Oct 2 19:14:50.130756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2819974941.mount: Deactivated successfully. Oct 2 19:14:50.133410 env[1145]: time="2023-10-02T19:14:50.133379868Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:14:50.134205 env[1145]: time="2023-10-02T19:14:50.134191253Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:14:50.135099 env[1145]: time="2023-10-02T19:14:50.135082821Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:14:50.135964 env[1145]: time="2023-10-02T19:14:50.135951951Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:14:50.137071 env[1145]: time="2023-10-02T19:14:50.137059881Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:14:50.137430 env[1145]: time="2023-10-02T19:14:50.137417410Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:14:50.138740 env[1145]: time="2023-10-02T19:14:50.138722969Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:14:50.139069 env[1145]: time="2023-10-02T19:14:50.139054131Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:14:50.156272 env[1145]: time="2023-10-02T19:14:50.148707854Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:14:50.156272 env[1145]: time="2023-10-02T19:14:50.148751153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:14:50.156272 env[1145]: time="2023-10-02T19:14:50.148758278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:14:50.156272 env[1145]: time="2023-10-02T19:14:50.148862159Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad pid=1636 runtime=io.containerd.runc.v2 Oct 2 19:14:50.156982 env[1145]: time="2023-10-02T19:14:50.148728991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:14:50.156982 env[1145]: time="2023-10-02T19:14:50.148753667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:14:50.156982 env[1145]: time="2023-10-02T19:14:50.148760451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:14:50.156982 env[1145]: time="2023-10-02T19:14:50.148865053Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/85b49366aba0e374880dde8e3ff84fc662b8f86672bfc9965bdeb196823ace7c pid=1639 runtime=io.containerd.runc.v2 Oct 2 19:14:50.167740 systemd[1]: Started cri-containerd-a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad.scope. Oct 2 19:14:50.172285 kubelet[1563]: E1002 19:14:50.172259 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:50.175346 systemd[1]: Started cri-containerd-85b49366aba0e374880dde8e3ff84fc662b8f86672bfc9965bdeb196823ace7c.scope. 
Oct 2 19:14:50.183000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.183000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.183000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.183000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.183000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.183000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.183000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.183000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.183000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.184000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.184000 audit: BPF prog-id=61 op=LOAD Oct 2 19:14:50.184000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.184000 audit[1658]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=1636 pid=1658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:50.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135653430613065396236663437643362313962326639653262396437 Oct 2 19:14:50.185000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.185000 audit[1658]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=1636 pid=1658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:50.185000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135653430613065396236663437643362313962326639653262396437 Oct 2 19:14:50.185000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.185000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.185000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.185000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.185000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.185000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.185000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.185000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.185000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.185000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.185000 audit: BPF prog-id=62 op=LOAD Oct 2 19:14:50.185000 audit[1658]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c0001d9400 items=0 ppid=1636 pid=1658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:50.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135653430613065396236663437643362313962326639653262396437 Oct 2 19:14:50.185000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.185000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.185000 audit[1658]: AVC avc: 
denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.185000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.185000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.185000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.185000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.185000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.185000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.185000 audit: BPF prog-id=63 op=LOAD Oct 2 19:14:50.185000 audit[1658]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c0001d9448 items=0 ppid=1636 pid=1658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:50.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135653430613065396236663437643362313962326639653262396437 Oct 2 19:14:50.186000 audit: BPF prog-id=63 op=UNLOAD Oct 2 19:14:50.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit: BPF prog-id=64 op=LOAD Oct 2 19:14:50.187000 audit[1659]: AVC avc: denied { bpf } for pid=1659 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit[1659]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=1639 pid=1659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:50.187000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835623439333636616261306533373438383064646538653366663834 Oct 2 19:14:50.187000 audit[1659]: AVC avc: denied { perfmon } for pid=1659 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit[1659]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=1639 pid=1659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:50.187000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835623439333636616261306533373438383064646538653366663834 Oct 2 19:14:50.187000 audit[1659]: AVC avc: denied { bpf } for pid=1659 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit[1659]: AVC avc: denied { bpf } for pid=1659 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit[1659]: AVC avc: denied { bpf } for pid=1659 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit[1659]: AVC avc: denied { perfmon } for pid=1659 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit[1659]: AVC avc: denied { perfmon } for pid=1659 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:14:50.187000 audit[1659]: AVC avc: denied { perfmon } for pid=1659 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit[1659]: AVC avc: denied { perfmon } for pid=1659 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit[1659]: AVC avc: denied { perfmon } for pid=1659 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit[1659]: AVC avc: denied { bpf } for pid=1659 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit[1659]: AVC avc: denied { bpf } for pid=1659 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.187000 audit: BPF prog-id=65 op=LOAD Oct 2 19:14:50.187000 audit[1659]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c00034e880 items=0 ppid=1639 pid=1659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:50.187000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835623439333636616261306533373438383064646538653366663834 Oct 2 19:14:50.188000 audit[1659]: AVC avc: denied { bpf } for pid=1659 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.188000 audit[1659]: AVC avc: denied { bpf } for pid=1659 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.188000 audit[1659]: AVC avc: denied { perfmon } for pid=1659 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.188000 audit[1659]: AVC avc: denied { perfmon } for pid=1659 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.188000 audit[1659]: AVC avc: denied { perfmon } for pid=1659 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.188000 audit[1659]: AVC avc: denied { perfmon } for pid=1659 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.188000 audit[1659]: AVC avc: denied { perfmon } for pid=1659 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.188000 audit[1659]: AVC avc: denied { bpf } for pid=1659 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.188000 audit[1659]: AVC avc: denied { bpf } for pid=1659 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.188000 audit: BPF prog-id=66 op=LOAD Oct 2 19:14:50.188000 audit[1659]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c00034e8c8 items=0 ppid=1639 pid=1659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:50.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835623439333636616261306533373438383064646538653366663834 Oct 2 19:14:50.188000 audit: BPF prog-id=66 op=UNLOAD Oct 2 19:14:50.188000 audit: BPF prog-id=65 op=UNLOAD Oct 2 19:14:50.188000 audit[1659]: AVC avc: denied { bpf } for pid=1659 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.188000 audit[1659]: AVC avc: denied { bpf } for pid=1659 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.188000 audit[1659]: AVC avc: denied { bpf } for pid=1659 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.188000 audit[1659]: AVC avc: denied { perfmon } for pid=1659 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.188000 audit[1659]: AVC avc: denied { perfmon } for pid=1659 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.188000 audit[1659]: AVC avc: denied { perfmon } for pid=1659 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.188000 audit[1659]: AVC avc: denied { perfmon } for pid=1659 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.188000 audit[1659]: AVC avc: denied { perfmon } for pid=1659 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.188000 audit[1659]: AVC avc: denied { bpf } for pid=1659 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.189000 audit: BPF prog-id=62 op=UNLOAD Oct 2 19:14:50.189000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.189000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.189000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.189000 audit[1658]: AVC avc: denied { 
perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.189000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.189000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.189000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.189000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.189000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.189000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.189000 audit: BPF prog-id=67 op=LOAD Oct 2 19:14:50.189000 audit[1658]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c0001d9858 items=0 ppid=1636 pid=1658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:50.189000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135653430613065396236663437643362313962326639653262396437 Oct 2 19:14:50.188000 audit[1659]: AVC avc: denied { bpf } for pid=1659 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:50.188000 audit: BPF prog-id=68 op=LOAD Oct 2 19:14:50.188000 audit[1659]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c00034ecd8 items=0 ppid=1639 pid=1659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:50.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835623439333636616261306533373438383064646538653366663834 Oct 2 19:14:50.198239 env[1145]: time="2023-10-02T19:14:50.198215228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vcp76,Uid:c818cc1e-986d-420e-8a41-56984e15e30f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\"" Oct 2 19:14:50.199663 env[1145]: time="2023-10-02T19:14:50.199647367Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 19:14:50.202861 env[1145]: time="2023-10-02T19:14:50.202834616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-92s4f,Uid:6a8d76e2-f6e6-48ee-b2f4-9f7dfdc62b15,Namespace:kube-system,Attempt:0,} returns sandbox id \"85b49366aba0e374880dde8e3ff84fc662b8f86672bfc9965bdeb196823ace7c\"" Oct 2 19:14:51.172974 kubelet[1563]: E1002 19:14:51.172940 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:52.174003 kubelet[1563]: E1002 19:14:52.173975 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:53.174625 kubelet[1563]: E1002 19:14:53.174604 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:53.753879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1950575419.mount: Deactivated successfully. Oct 2 19:14:54.175167 kubelet[1563]: E1002 19:14:54.175138 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:55.175844 kubelet[1563]: E1002 19:14:55.175821 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:55.666993 env[1145]: time="2023-10-02T19:14:55.666959186Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:14:55.667746 env[1145]: time="2023-10-02T19:14:55.667730233Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:14:55.668493 env[1145]: time="2023-10-02T19:14:55.668478615Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:14:55.668886 env[1145]: time="2023-10-02T19:14:55.668869605Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 2 19:14:55.669828 env[1145]: time="2023-10-02T19:14:55.669813628Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.6\"" Oct 2 19:14:55.670796 env[1145]: time="2023-10-02T19:14:55.670780357Z" level=info msg="CreateContainer within sandbox \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:14:55.675779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2905732687.mount: Deactivated successfully. Oct 2 19:14:55.678455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2658300128.mount: Deactivated successfully. 
Oct 2 19:14:55.685617 env[1145]: time="2023-10-02T19:14:55.685597106Z" level=info msg="CreateContainer within sandbox \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a\"" Oct 2 19:14:55.686192 env[1145]: time="2023-10-02T19:14:55.686175256Z" level=info msg="StartContainer for \"259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a\"" Oct 2 19:14:55.697057 systemd[1]: Started cri-containerd-259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a.scope. Oct 2 19:14:55.705816 systemd[1]: cri-containerd-259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a.scope: Deactivated successfully. Oct 2 19:14:55.705979 systemd[1]: Stopped cri-containerd-259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a.scope. Oct 2 19:14:56.176606 kubelet[1563]: E1002 19:14:56.176577 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:56.214160 env[1145]: time="2023-10-02T19:14:56.214128117Z" level=info msg="shim disconnected" id=259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a Oct 2 19:14:56.214280 env[1145]: time="2023-10-02T19:14:56.214262873Z" level=warning msg="cleaning up after shim disconnected" id=259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a namespace=k8s.io Oct 2 19:14:56.214343 env[1145]: time="2023-10-02T19:14:56.214330067Z" level=info msg="cleaning up dead shim" Oct 2 19:14:56.220707 env[1145]: time="2023-10-02T19:14:56.220677403Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:14:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1731 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:14:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:14:56.220912 env[1145]: time="2023-10-02T19:14:56.220843629Z" level=error msg="copy shim log" error="read /proc/self/fd/44: file already closed" Oct 2 19:14:56.221015 env[1145]: time="2023-10-02T19:14:56.220985127Z" level=error msg="Failed to pipe stdout of container \"259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a\"" error="reading from a closed fifo" Oct 2 19:14:56.221284 env[1145]: time="2023-10-02T19:14:56.221104262Z" level=error msg="Failed to pipe stderr of container \"259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a\"" error="reading from a closed fifo" Oct 2 19:14:56.221989 env[1145]: time="2023-10-02T19:14:56.221959068Z" level=error msg="StartContainer for \"259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:14:56.222224 kubelet[1563]: E1002 19:14:56.222206 1563 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" 
containerID="259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a" Oct 2 19:14:56.222320 kubelet[1563]: E1002 19:14:56.222302 1563 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:14:56.222320 kubelet[1563]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:14:56.222320 kubelet[1563]: rm /hostbin/cilium-mount Oct 2 19:14:56.222417 kubelet[1563]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7gjsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:14:56.222417 kubelet[1563]: E1002 19:14:56.222352 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-vcp76" podUID=c818cc1e-986d-420e-8a41-56984e15e30f Oct 2 19:14:56.290359 env[1145]: time="2023-10-02T19:14:56.290327680Z" level=info msg="CreateContainer within sandbox \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:14:56.297185 env[1145]: time="2023-10-02T19:14:56.297151302Z" level=info msg="CreateContainer within sandbox \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1\"" Oct 2 19:14:56.297846 env[1145]: time="2023-10-02T19:14:56.297825283Z" level=info msg="StartContainer for 
\"f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1\"" Oct 2 19:14:56.308922 systemd[1]: Started cri-containerd-f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1.scope. Oct 2 19:14:56.315099 systemd[1]: cri-containerd-f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1.scope: Deactivated successfully. Oct 2 19:14:56.315264 systemd[1]: Stopped cri-containerd-f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1.scope. Oct 2 19:14:56.320351 env[1145]: time="2023-10-02T19:14:56.320315082Z" level=info msg="shim disconnected" id=f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1 Oct 2 19:14:56.320461 env[1145]: time="2023-10-02T19:14:56.320355716Z" level=warning msg="cleaning up after shim disconnected" id=f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1 namespace=k8s.io Oct 2 19:14:56.320461 env[1145]: time="2023-10-02T19:14:56.320364729Z" level=info msg="cleaning up dead shim" Oct 2 19:14:56.325114 env[1145]: time="2023-10-02T19:14:56.325087649Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:14:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1770 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:14:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:14:56.325348 env[1145]: time="2023-10-02T19:14:56.325314754Z" level=error msg="copy shim log" error="read /proc/self/fd/45: file already closed" Oct 2 19:14:56.328764 env[1145]: time="2023-10-02T19:14:56.328738191Z" level=error msg="Failed to pipe stdout of container \"f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1\"" error="reading from a closed fifo" Oct 2 19:14:56.328801 env[1145]: time="2023-10-02T19:14:56.328769672Z" level=error msg="Failed to pipe stderr of container \"f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1\"" error="reading from a closed fifo" Oct 2 19:14:56.329327 env[1145]: time="2023-10-02T19:14:56.329292548Z" level=error msg="StartContainer for \"f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:14:56.329462 kubelet[1563]: E1002 19:14:56.329446 1563 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1" Oct 2 19:14:56.329551 kubelet[1563]: E1002 19:14:56.329521 1563 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:14:56.329551 kubelet[1563]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:14:56.329551 kubelet[1563]: rm /hostbin/cilium-mount Oct 2 19:14:56.329551 
kubelet[1563]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7gjsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:14:56.329551 kubelet[1563]: E1002 19:14:56.329544 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-vcp76" podUID=c818cc1e-986d-420e-8a41-56984e15e30f Oct 2 19:14:56.674578 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a-rootfs.mount: Deactivated successfully. Oct 2 19:14:56.994258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3365717599.mount: Deactivated successfully. 
Oct 2 19:14:57.177473 kubelet[1563]: E1002 19:14:57.177451 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:57.290367 kubelet[1563]: I1002 19:14:57.290308 1563 scope.go:115] "RemoveContainer" containerID="259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a" Oct 2 19:14:57.290628 kubelet[1563]: I1002 19:14:57.290613 1563 scope.go:115] "RemoveContainer" containerID="259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a" Oct 2 19:14:57.291315 env[1145]: time="2023-10-02T19:14:57.291292919Z" level=info msg="RemoveContainer for \"259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a\"" Oct 2 19:14:57.292485 env[1145]: time="2023-10-02T19:14:57.292467665Z" level=info msg="RemoveContainer for \"259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a\" returns successfully" Oct 2 19:14:57.292681 env[1145]: time="2023-10-02T19:14:57.292668819Z" level=info msg="RemoveContainer for \"259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a\"" Oct 2 19:14:57.292756 env[1145]: time="2023-10-02T19:14:57.292745654Z" level=info msg="RemoveContainer for \"259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a\" returns successfully" Oct 2 19:14:57.293133 kubelet[1563]: E1002 19:14:57.293027 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f)\"" pod="kube-system/cilium-vcp76" podUID=c818cc1e-986d-420e-8a41-56984e15e30f Oct 2 19:14:57.424973 env[1145]: time="2023-10-02T19:14:57.424849570Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:14:57.434143 env[1145]: time="2023-10-02T19:14:57.434114078Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ec57bbfaaae73ecc3c12f05d5ae974468cc0ef356dee588cd15fd471815c7985,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:14:57.439407 env[1145]: time="2023-10-02T19:14:57.439344375Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:14:57.443919 env[1145]: time="2023-10-02T19:14:57.443899112Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:8e9eff2f6d0b398f9ac5f5a15c1cb7d5f468f28d64a78d593d57f72a969a54ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:14:57.444014 env[1145]: time="2023-10-02T19:14:57.443990887Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.6\" returns image reference \"sha256:ec57bbfaaae73ecc3c12f05d5ae974468cc0ef356dee588cd15fd471815c7985\"" Oct 2 19:14:57.445331 env[1145]: time="2023-10-02T19:14:57.445307323Z" level=info msg="CreateContainer within sandbox \"85b49366aba0e374880dde8e3ff84fc662b8f86672bfc9965bdeb196823ace7c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:14:57.453337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257417045.mount: Deactivated successfully. 
Oct 2 19:14:57.465730 env[1145]: time="2023-10-02T19:14:57.465690032Z" level=info msg="CreateContainer within sandbox \"85b49366aba0e374880dde8e3ff84fc662b8f86672bfc9965bdeb196823ace7c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d7634348e5e31a462845ac2b8dab3d1e1b6abe460f6bf7c08aaa9c8ea59524bf\"" Oct 2 19:14:57.466223 env[1145]: time="2023-10-02T19:14:57.466203233Z" level=info msg="StartContainer for \"d7634348e5e31a462845ac2b8dab3d1e1b6abe460f6bf7c08aaa9c8ea59524bf\"" Oct 2 19:14:57.477067 systemd[1]: Started cri-containerd-d7634348e5e31a462845ac2b8dab3d1e1b6abe460f6bf7c08aaa9c8ea59524bf.scope. Oct 2 19:14:57.487000 audit[1791]: AVC avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.491065 kernel: kauditd_printk_skb: 530 callbacks suppressed Oct 2 19:14:57.491119 kernel: audit: type=1400 audit(1696274097.487:582): avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.491141 kernel: audit: type=1300 audit(1696274097.487:582): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1639 pid=1791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.487000 audit[1791]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1639 pid=1791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.494645 kernel: audit: type=1327 audit(1696274097.487:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437363334333438653565333161343632383435616332623864616233 Oct 2 19:14:57.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437363334333438653565333161343632383435616332623864616233 Oct 2 19:14:57.487000 audit[1791]: AVC avc: denied { bpf } for pid=1791 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.500240 kernel: audit: type=1400 audit(1696274097.487:583): avc: denied { bpf } for pid=1791 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.500270 kernel: audit: type=1400 audit(1696274097.487:583): avc: denied { bpf } for pid=1791 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.487000 audit[1791]: AVC avc: denied { bpf } for pid=1791 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.487000 audit[1791]: AVC avc: denied { bpf } for pid=1791 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.504914 kernel: audit: type=1400 audit(1696274097.487:583): avc: denied { bpf } for pid=1791 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.487000 audit[1791]: AVC avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.507526 kernel: audit: type=1400 audit(1696274097.487:583): avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.487000 audit[1791]: AVC avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.487000 audit[1791]: AVC avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.512774 kernel: audit: type=1400 audit(1696274097.487:583): avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.512800 kernel: audit: type=1400 audit(1696274097.487:583): avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.487000 audit[1791]: AVC avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.515327 kernel: audit: type=1400 audit(1696274097.487:583): avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.487000 audit[1791]: AVC avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.487000 audit[1791]: AVC avc: denied { bpf } for pid=1791 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.487000 audit[1791]: AVC avc: denied { bpf } for pid=1791 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.487000 audit: BPF prog-id=69 op=LOAD Oct 2 19:14:57.487000 audit[1791]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c0001f9530 items=0 ppid=1639 pid=1791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437363334333438653565333161343632383435616332623864616233 Oct 2 19:14:57.490000 
audit[1791]: AVC avc: denied { bpf } for pid=1791 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.490000 audit[1791]: AVC avc: denied { bpf } for pid=1791 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.490000 audit[1791]: AVC avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.490000 audit[1791]: AVC avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.490000 audit[1791]: AVC avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.490000 audit[1791]: AVC avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.490000 audit[1791]: AVC avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.490000 audit[1791]: AVC avc: denied { bpf } for pid=1791 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.490000 audit[1791]: AVC avc: denied { bpf } for pid=1791 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.490000 audit: BPF prog-id=70 op=LOAD Oct 2 19:14:57.490000 audit[1791]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c0001f9578 items=0 ppid=1639 pid=1791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.490000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437363334333438653565333161343632383435616332623864616233 Oct 2 19:14:57.497000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:14:57.497000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:14:57.497000 audit[1791]: AVC avc: denied { bpf } for pid=1791 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.497000 audit[1791]: AVC avc: denied { bpf } for pid=1791 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.497000 audit[1791]: AVC avc: denied { bpf } for pid=1791 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.497000 audit[1791]: AVC avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.497000 
audit[1791]: AVC avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.497000 audit[1791]: AVC avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.497000 audit[1791]: AVC avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.497000 audit[1791]: AVC avc: denied { perfmon } for pid=1791 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.497000 audit[1791]: AVC avc: denied { bpf } for pid=1791 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.497000 audit[1791]: AVC avc: denied { bpf } for pid=1791 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:57.497000 audit: BPF prog-id=71 op=LOAD Oct 2 19:14:57.497000 audit[1791]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c0001f9608 items=0 ppid=1639 pid=1791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437363334333438653565333161343632383435616332623864616233 Oct 2 19:14:57.535233 env[1145]: time="2023-10-02T19:14:57.535207219Z" level=info msg="StartContainer for \"d7634348e5e31a462845ac2b8dab3d1e1b6abe460f6bf7c08aaa9c8ea59524bf\" returns successfully" Oct 2 19:14:57.553000 audit[1841]: NETFILTER_CFG table=mangle:14 family=2 entries=1 op=nft_register_chain pid=1841 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.553000 audit[1841]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff4d8302d0 a2=0 a3=7fff4d8302bc items=0 ppid=1803 pid=1841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.553000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:14:57.554000 audit[1842]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_chain pid=1842 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.554000 audit[1842]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd57c22850 a2=0 a3=7ffd57c2283c items=0 ppid=1803 pid=1842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.554000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:14:57.555000 audit[1843]: NETFILTER_CFG 
table=mangle:16 family=10 entries=1 op=nft_register_chain pid=1843 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.555000 audit[1843]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca0497330 a2=0 a3=7ffca049731c items=0 ppid=1803 pid=1843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.555000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:14:57.556000 audit[1844]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_chain pid=1844 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.556000 audit[1844]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0443d1c0 a2=0 a3=7ffc0443d1ac items=0 ppid=1803 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.556000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:14:57.558000 audit[1845]: NETFILTER_CFG table=nat:18 family=10 entries=1 op=nft_register_chain pid=1845 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.558000 audit[1845]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda567e9f0 a2=0 a3=7ffda567e9dc items=0 ppid=1803 pid=1845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.558000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:14:57.558000 audit[1846]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1846 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.558000 audit[1846]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0bed6000 a2=0 a3=7ffe0bed5fec items=0 ppid=1803 pid=1846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.558000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:14:57.657000 audit[1847]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_chain pid=1847 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.657000 audit[1847]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd143a9ed0 a2=0 a3=7ffd143a9ebc items=0 ppid=1803 pid=1847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.657000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:14:57.659000 audit[1849]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1849 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.659000 audit[1849]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd1e74cf30 a2=0 a3=7ffd1e74cf1c items=0 ppid=1803 pid=1849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.659000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:14:57.662000 audit[1852]: NETFILTER_CFG table=filter:22 family=2 entries=2 op=nft_register_chain pid=1852 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.662000 audit[1852]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd34b7c770 a2=0 a3=7ffd34b7c75c items=0 ppid=1803 pid=1852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.662000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:14:57.663000 audit[1853]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_chain pid=1853 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.663000 audit[1853]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2d271e40 a2=0 a3=7ffd2d271e2c items=0 ppid=1803 pid=1853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.663000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:14:57.665000 audit[1855]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=1855 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.665000 audit[1855]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcc052fa80 a2=0 a3=7ffcc052fa6c items=0 ppid=1803 pid=1855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.665000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:14:57.666000 audit[1856]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=1856 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.666000 audit[1856]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc668d68f0 a2=0 a3=7ffc668d68dc items=0 ppid=1803 pid=1856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.666000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:14:57.668000 audit[1858]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=1858 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.668000 audit[1858]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe53743310 a2=0 a3=7ffe537432fc items=0 ppid=1803 pid=1858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.668000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:14:57.671000 audit[1861]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=1861 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.671000 audit[1861]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff74f3d9c0 a2=0 a3=7fff74f3d9ac items=0 ppid=1803 pid=1861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.671000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:14:57.671000 audit[1862]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1862 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.671000 audit[1862]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd15db14f0 a2=0 a3=7ffd15db14dc items=0 ppid=1803 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.671000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:14:57.675000 audit[1864]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1864 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.675000 audit[1864]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd43763b60 a2=0 a3=7ffd43763b4c items=0 ppid=1803 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.675000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:14:57.676000 audit[1865]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=1865 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.676000 audit[1865]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea851d190 a2=0 a3=7ffea851d17c items=0 ppid=1803 pid=1865 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.676000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:14:57.678000 audit[1867]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_rule pid=1867 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.678000 audit[1867]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff2bb1f0b0 a2=0 a3=7fff2bb1f09c items=0 ppid=1803 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.678000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:14:57.680000 audit[1870]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=1870 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.680000 audit[1870]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe8c88c890 a2=0 a3=7ffe8c88c87c items=0 ppid=1803 pid=1870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.680000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:14:57.682000 audit[1873]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1873 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.682000 audit[1873]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffedb541850 a2=0 a3=7ffedb54183c items=0 ppid=1803 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.682000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:14:57.683000 audit[1874]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1874 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.683000 audit[1874]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcd37cea70 a2=0 a3=7ffcd37cea5c items=0 ppid=1803 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.683000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:14:57.684000 audit[1876]: NETFILTER_CFG table=nat:35 family=2 entries=2 
op=nft_register_chain pid=1876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.684000 audit[1876]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd38ee9fe0 a2=0 a3=7ffd38ee9fcc items=0 ppid=1803 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.684000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:14:57.719000 audit[1881]: NETFILTER_CFG table=nat:36 family=2 entries=2 op=nft_register_chain pid=1881 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.719000 audit[1881]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff57b536d0 a2=0 a3=7fff57b536bc items=0 ppid=1803 pid=1881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.719000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:14:57.723000 audit[1886]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1886 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.723000 audit[1886]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd3bbd0e0 a2=0 a3=7ffdd3bbd0cc items=0 ppid=1803 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.723000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:14:57.724000 audit[1888]: NETFILTER_CFG table=nat:38 family=2 entries=2 op=nft_register_chain pid=1888 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:14:57.724000 audit[1888]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd71f6b240 a2=0 a3=7ffd71f6b22c items=0 ppid=1803 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.724000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:14:57.731000 audit[1890]: NETFILTER_CFG table=filter:39 family=2 entries=8 op=nft_register_rule pid=1890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:14:57.731000 audit[1890]: SYSCALL arch=c000003e syscall=46 success=yes exit=4956 a0=3 a1=7ffde75225b0 a2=0 a3=7ffde752259c items=0 ppid=1803 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.731000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:14:57.747000 audit[1890]: NETFILTER_CFG table=nat:40 family=2 entries=14 op=nft_register_chain pid=1890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:14:57.747000 audit[1890]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffde75225b0 a2=0 a3=7ffde752259c items=0 ppid=1803 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.747000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:14:57.748000 audit[1896]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=1896 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.748000 audit[1896]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffced007d0 a2=0 a3=7fffced007bc items=0 ppid=1803 pid=1896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.748000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:14:57.750000 audit[1898]: NETFILTER_CFG table=filter:42 family=10 entries=2 op=nft_register_chain pid=1898 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.750000 audit[1898]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcc23a1e40 a2=0 a3=7ffcc23a1e2c items=0 ppid=1803 pid=1898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.750000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:14:57.753000 audit[1901]: NETFILTER_CFG table=filter:43 family=10 entries=2 op=nft_register_chain pid=1901 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.753000 audit[1901]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc97d22f90 a2=0 a3=7ffc97d22f7c items=0 ppid=1803 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.753000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:14:57.753000 audit[1902]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=1902 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.753000 audit[1902]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd63232f30 a2=0 a3=7ffd63232f1c items=0 ppid=1803 pid=1902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.753000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:14:57.755000 audit[1904]: NETFILTER_CFG table=filter:45 family=10 entries=1 op=nft_register_rule pid=1904 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.755000 audit[1904]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd485490c0 a2=0 a3=7ffd485490ac items=0 ppid=1803 pid=1904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.755000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:14:57.756000 audit[1905]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.756000 audit[1905]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0e7ab740 a2=0 a3=7ffe0e7ab72c items=0 ppid=1803 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.756000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:14:57.757000 audit[1907]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_rule pid=1907 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.757000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd13fed4e0 a2=0 a3=7ffd13fed4cc items=0 ppid=1803 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.757000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:14:57.759000 audit[1910]: NETFILTER_CFG table=filter:48 family=10 entries=2 op=nft_register_chain pid=1910 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.759000 audit[1910]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc5a332be0 a2=0 a3=7ffc5a332bcc items=0 ppid=1803 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.759000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:14:57.760000 audit[1911]: NETFILTER_CFG table=filter:49 family=10 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Oct 2 19:14:57.760000 audit[1911]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2ff8d340 a2=0 a3=7ffd2ff8d32c items=0 ppid=1803 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.760000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:14:57.763000 audit[1913]: NETFILTER_CFG table=filter:50 family=10 entries=1 op=nft_register_rule pid=1913 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.763000 audit[1913]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc437f47f0 a2=0 a3=7ffc437f47dc items=0 ppid=1803 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.763000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:14:57.764000 audit[1914]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=1914 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.764000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffe4723340 a2=0 a3=7fffe472332c items=0 ppid=1803 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.764000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:14:57.765000 audit[1916]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=1916 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.765000 audit[1916]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffac614f50 a2=0 a3=7fffac614f3c items=0 ppid=1803 pid=1916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.765000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:14:57.767000 audit[1919]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_rule pid=1919 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.767000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcfbeba5f0 a2=0 a3=7ffcfbeba5dc items=0 ppid=1803 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.767000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:14:57.769000 audit[1922]: NETFILTER_CFG table=filter:54 family=10 entries=1 op=nft_register_rule pid=1922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.769000 audit[1922]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffda389b90 a2=0 a3=7fffda389b7c items=0 ppid=1803 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.769000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:14:57.770000 audit[1923]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.770000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff88a5ff50 a2=0 a3=7fff88a5ff3c items=0 ppid=1803 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.770000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:14:57.771000 audit[1925]: NETFILTER_CFG table=nat:56 family=10 entries=2 op=nft_register_chain pid=1925 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.771000 audit[1925]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffdc9223e40 a2=0 a3=7ffdc9223e2c items=0 ppid=1803 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.771000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:14:57.773000 audit[1928]: NETFILTER_CFG table=nat:57 family=10 entries=2 op=nft_register_chain pid=1928 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.773000 audit[1928]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffde763a950 a2=0 a3=7ffde763a93c items=0 ppid=1803 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.773000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:14:57.774000 audit[1929]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=1929 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.774000 audit[1929]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=100 a0=3 a1=7ffed3b5f580 a2=0 a3=7ffed3b5f56c items=0 ppid=1803 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.774000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:14:57.776000 audit[1931]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_rule pid=1931 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.776000 audit[1931]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff17733490 a2=0 a3=7fff1773347c items=0 ppid=1803 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.776000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:14:57.778000 audit[1934]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_rule pid=1934 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.778000 audit[1934]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe6b52a2d0 a2=0 a3=7ffe6b52a2bc items=0 ppid=1803 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.778000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:14:57.778000 audit[1935]: NETFILTER_CFG table=nat:61 family=10 entries=1 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.778000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff366edd60 a2=0 a3=7fff366edd4c items=0 ppid=1803 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.778000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:14:57.780000 audit[1937]: NETFILTER_CFG table=nat:62 family=10 entries=2 op=nft_register_chain pid=1937 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:14:57.780000 audit[1937]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff7996cd40 a2=0 a3=7fff7996cd2c items=0 ppid=1803 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.780000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:14:57.781000 audit[1939]: NETFILTER_CFG table=filter:63 family=10 entries=3 op=nft_register_rule pid=1939 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:14:57.781000 audit[1939]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=1916 a0=3 a1=7ffee842f630 a2=0 a3=7ffee842f61c items=0 ppid=1803 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.781000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:14:57.782000 audit[1939]: NETFILTER_CFG table=nat:64 family=10 entries=7 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:14:57.782000 audit[1939]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffee842f630 a2=0 a3=7ffee842f61c items=0 ppid=1803 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:57.782000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:14:58.178190 kubelet[1563]: E1002 19:14:58.178145 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:58.292194 kubelet[1563]: E1002 19:14:58.292174 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f)\"" pod="kube-system/cilium-vcp76" podUID=c818cc1e-986d-420e-8a41-56984e15e30f Oct 2 19:14:58.310814 kubelet[1563]: I1002 19:14:58.310744 1563 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-92s4f" podStartSLOduration=3.069990018 podCreationTimestamp="2023-10-02 19:14:48 +0000 UTC" firstStartedPulling="2023-10-02 19:14:50.203407273 +0000 UTC m=+3.309033680" lastFinishedPulling="2023-10-02 19:14:57.444126002 +0000 UTC m=+10.549752416" observedRunningTime="2023-10-02 19:14:58.310437516 +0000 UTC m=+11.416063934" watchObservedRunningTime="2023-10-02 19:14:58.310708754 +0000 UTC m=+11.416335173" Oct 2 19:14:59.179182 kubelet[1563]: E1002 19:14:59.179157 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:59.319277 kubelet[1563]: W1002 19:14:59.319237 1563 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc818cc1e_986d_420e_8a41_56984e15e30f.slice/cri-containerd-259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a.scope WatchSource:0}: container "259dfff001204c2630822a56815d9edd9977060b597057424d4f8db23e6cf78a" in namespace "k8s.io": not found Oct 2 19:15:00.179245 kubelet[1563]: E1002 19:15:00.179223 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:01.180263 kubelet[1563]: E1002 19:15:01.180231 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:02.180862 kubelet[1563]: E1002 19:15:02.180834 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:02.424869 kubelet[1563]: W1002 19:15:02.424833 1563 manager.go:1159] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc818cc1e_986d_420e_8a41_56984e15e30f.slice/cri-containerd-f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1.scope WatchSource:0}: task f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1 not found: not found Oct 2 19:15:02.429004 kubelet[1563]: E1002 19:15:02.428981 1563 cadvisor_stats_provider.go:442] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a8d76e2_f6e6_48ee_b2f4_9f7dfdc62b15.slice/cri-containerd-d7634348e5e31a462845ac2b8dab3d1e1b6abe460f6bf7c08aaa9c8ea59524bf.scope\": RecentStats: unable to find data in memory cache]" Oct 2 19:15:03.181777 kubelet[1563]: E1002 19:15:03.181743 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:04.182085 kubelet[1563]: E1002 19:15:04.182055 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:05.182880 kubelet[1563]: E1002 19:15:05.182856 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:06.183807 kubelet[1563]: E1002 19:15:06.183774 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:07.168886 kubelet[1563]: E1002 19:15:07.168867 1563 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:07.184124 kubelet[1563]: E1002 19:15:07.184109 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:08.185476 kubelet[1563]: E1002 19:15:08.185446 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:09.186540 kubelet[1563]: E1002 19:15:09.186514 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:09.279384 env[1145]: time="2023-10-02T19:15:09.279282636Z" level=info msg="CreateContainer within sandbox \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:15:09.284202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4238236102.mount: Deactivated successfully. Oct 2 19:15:09.287092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3867283440.mount: Deactivated successfully. Oct 2 19:15:09.288997 env[1145]: time="2023-10-02T19:15:09.288978192Z" level=info msg="CreateContainer within sandbox \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7\"" Oct 2 19:15:09.289404 env[1145]: time="2023-10-02T19:15:09.289383403Z" level=info msg="StartContainer for \"72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7\"" Oct 2 19:15:09.300582 systemd[1]: Started cri-containerd-72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7.scope. Oct 2 19:15:09.309187 systemd[1]: cri-containerd-72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7.scope: Deactivated successfully. 
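
A note on reading the audit records earlier in this log: each kube-proxy rule change is recorded three times — a NETFILTER_CFG event (family=2 is IPv4, family=10 is IPv6), the raw SYSCALL record (syscall 46 on arch c000003e/x86_64 is sendmsg, which the nft-backed xtables tools use to push their netlink batch), and a PROCTITLE record whose value is the invoking command line, hex-encoded with NUL bytes between arguments. Decoding those values shows xtables-nft-multi creating the usual kube-proxy chains (KUBE-PROXY-CANARY, KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD, KUBE-PROXY-FIREWALL, KUBE-POSTROUTING) in the mangle, filter and nat tables for both address families. A minimal decoding sketch in Python follows; the helper name is ours, not part of any tool.

# Decode an audit PROCTITLE value back into a readable command line.
# The field is the process argv, hex-encoded, with NUL bytes separating arguments.
import binascii

def decode_proctitle(hexstr: str) -> str:
    raw = binascii.unhexlify(hexstr)
    return " ".join(
        part.decode("utf-8", errors="replace")
        for part in raw.split(b"\x00")
        if part
    )

# Example taken verbatim from one of the iptables-restore records above:
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
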
Oct 2 19:15:09.309337 systemd[1]: Stopped cri-containerd-72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7.scope. Oct 2 19:15:09.620353 env[1145]: time="2023-10-02T19:15:09.619970385Z" level=info msg="shim disconnected" id=72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7 Oct 2 19:15:09.620353 env[1145]: time="2023-10-02T19:15:09.620008078Z" level=warning msg="cleaning up after shim disconnected" id=72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7 namespace=k8s.io Oct 2 19:15:09.620353 env[1145]: time="2023-10-02T19:15:09.620016275Z" level=info msg="cleaning up dead shim" Oct 2 19:15:09.625433 env[1145]: time="2023-10-02T19:15:09.625392320Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:15:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1965 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:15:09Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:15:09.625773 env[1145]: time="2023-10-02T19:15:09.625708580Z" level=error msg="copy shim log" error="read /proc/self/fd/55: file already closed" Oct 2 19:15:09.629778 env[1145]: time="2023-10-02T19:15:09.625864176Z" level=error msg="Failed to pipe stdout of container \"72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7\"" error="reading from a closed fifo" Oct 2 19:15:09.629835 env[1145]: time="2023-10-02T19:15:09.629739713Z" level=error msg="Failed to pipe stderr of container \"72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7\"" error="reading from a closed fifo" Oct 2 19:15:09.630474 env[1145]: time="2023-10-02T19:15:09.630437264Z" level=error msg="StartContainer for \"72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:15:09.630739 kubelet[1563]: E1002 19:15:09.630678 1563 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7" Oct 2 19:15:09.630908 kubelet[1563]: E1002 19:15:09.630858 1563 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:15:09.630908 kubelet[1563]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:15:09.630908 kubelet[1563]: rm /hostbin/cilium-mount Oct 2 19:15:09.630908 kubelet[1563]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7gjsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:15:09.630908 kubelet[1563]: E1002 19:15:09.630890 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-vcp76" podUID=c818cc1e-986d-420e-8a41-56984e15e30f Oct 2 19:15:10.186969 kubelet[1563]: E1002 19:15:10.186925 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:10.283240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7-rootfs.mount: Deactivated successfully. 
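
Each start attempt for the cilium mount-cgroup init container fails at the same point: runc reports "write /proc/self/attr/keycreate: invalid argument" while applying the SELinuxOptions (Type spc_t, Level s0) shown in the container spec above. The sketch below is a minimal, hypothetical reproduction of just that step, not runc's or Cilium's code: it writes a key-creation label to the same procfs attribute and reports whether the kernel accepts it. The full context string is an assumption assembled from the Type and Level fields in the spec, and a successful write relabels the running process's key-creation context, so treat it as a throwaway diagnostic.

# Minimal sketch: try the procfs write that the kubelet error above reports as
# failing. runc sets the SELinux key-creation label for the container's init
# process by writing a context into /proc/self/attr/keycreate; EINVAL from that
# write is what surfaces as RunContainerError in this log.
import errno

# Assumed context, assembled from Type=spc_t and Level=s0 in the pod spec above.
CONTEXT = "system_u:system_r:spc_t:s0"

def try_keycreate_label(context: str = CONTEXT) -> None:
    try:
        with open("/proc/self/attr/keycreate", "w") as f:
            f.write(context)
        print("kernel accepted key-creation context:", context)
    except OSError as e:
        if e.errno == errno.EINVAL:
            print("EINVAL: the kernel/policy rejected the context (matches the log)")
        else:
            print("write failed:", e)

if __name__ == "__main__":
    try_keycreate_label()
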
Oct 2 19:15:10.312941 kubelet[1563]: I1002 19:15:10.312920 1563 scope.go:115] "RemoveContainer" containerID="f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1" Oct 2 19:15:10.313161 kubelet[1563]: I1002 19:15:10.313146 1563 scope.go:115] "RemoveContainer" containerID="f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1" Oct 2 19:15:10.313956 env[1145]: time="2023-10-02T19:15:10.313936211Z" level=info msg="RemoveContainer for \"f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1\"" Oct 2 19:15:10.314511 env[1145]: time="2023-10-02T19:15:10.314480028Z" level=info msg="RemoveContainer for \"f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1\"" Oct 2 19:15:10.314702 env[1145]: time="2023-10-02T19:15:10.314631627Z" level=error msg="RemoveContainer for \"f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1\" failed" error="rpc error: code = NotFound desc = get container info: container \"f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1\" in namespace \"k8s.io\": not found" Oct 2 19:15:10.315088 kubelet[1563]: E1002 19:15:10.315073 1563 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = NotFound desc = get container info: container \"f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1\" in namespace \"k8s.io\": not found" containerID="f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1" Oct 2 19:15:10.315140 kubelet[1563]: E1002 19:15:10.315112 1563 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = NotFound desc = get container info: container "f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1" in namespace "k8s.io": not found; Skipping pod "cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f)" Oct 2 19:15:10.315279 kubelet[1563]: E1002 19:15:10.315267 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f)\"" pod="kube-system/cilium-vcp76" podUID=c818cc1e-986d-420e-8a41-56984e15e30f Oct 2 19:15:10.316159 env[1145]: time="2023-10-02T19:15:10.316142184Z" level=info msg="RemoveContainer for \"f0c760731614ae3dea195c329b5f41e641b963eb0364b8f36d56d41758bd6ec1\" returns successfully" Oct 2 19:15:11.187890 kubelet[1563]: E1002 19:15:11.187844 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:12.188810 kubelet[1563]: E1002 19:15:12.188782 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:12.724558 kubelet[1563]: W1002 19:15:12.724525 1563 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc818cc1e_986d_420e_8a41_56984e15e30f.slice/cri-containerd-72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7.scope WatchSource:0}: task 72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7 not found: not found Oct 2 19:15:13.188899 kubelet[1563]: E1002 19:15:13.188867 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:14.189934 kubelet[1563]: E1002 19:15:14.189912 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:15:15.190431 kubelet[1563]: E1002 19:15:15.190407 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:16.191182 kubelet[1563]: E1002 19:15:16.191152 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:17.192591 kubelet[1563]: E1002 19:15:17.192569 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:18.193126 kubelet[1563]: E1002 19:15:18.193100 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:19.194076 kubelet[1563]: E1002 19:15:19.194050 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:20.194947 kubelet[1563]: E1002 19:15:20.194914 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:21.195363 kubelet[1563]: E1002 19:15:21.195338 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:22.195862 kubelet[1563]: E1002 19:15:22.195829 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:23.196063 kubelet[1563]: E1002 19:15:23.196040 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:23.278067 kubelet[1563]: E1002 19:15:23.278048 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f)\"" pod="kube-system/cilium-vcp76" podUID=c818cc1e-986d-420e-8a41-56984e15e30f Oct 2 19:15:24.197296 kubelet[1563]: E1002 19:15:24.197258 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:25.197643 kubelet[1563]: E1002 19:15:25.197618 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:26.197963 kubelet[1563]: E1002 19:15:26.197937 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:27.169456 kubelet[1563]: E1002 19:15:27.169431 1563 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:27.198902 kubelet[1563]: E1002 19:15:27.198879 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:28.199485 kubelet[1563]: E1002 19:15:28.199449 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:29.199932 kubelet[1563]: E1002 19:15:29.199905 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:30.200335 kubelet[1563]: E1002 19:15:30.200310 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:31.200969 kubelet[1563]: E1002 19:15:31.200936 1563 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:32.201457 kubelet[1563]: E1002 19:15:32.201433 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:33.201656 kubelet[1563]: E1002 19:15:33.201511 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:34.202640 kubelet[1563]: E1002 19:15:34.202604 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:35.203030 kubelet[1563]: E1002 19:15:35.203006 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:36.203762 kubelet[1563]: E1002 19:15:36.203730 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:37.204492 kubelet[1563]: E1002 19:15:37.204461 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:38.204846 kubelet[1563]: E1002 19:15:38.204824 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:38.280097 env[1145]: time="2023-10-02T19:15:38.279851562Z" level=info msg="CreateContainer within sandbox \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:15:38.286018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1870769968.mount: Deactivated successfully. Oct 2 19:15:38.288811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774763854.mount: Deactivated successfully. Oct 2 19:15:38.290454 env[1145]: time="2023-10-02T19:15:38.290430025Z" level=info msg="CreateContainer within sandbox \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef\"" Oct 2 19:15:38.291008 env[1145]: time="2023-10-02T19:15:38.290992998Z" level=info msg="StartContainer for \"244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef\"" Oct 2 19:15:38.303126 systemd[1]: Started cri-containerd-244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef.scope. Oct 2 19:15:38.310829 systemd[1]: cri-containerd-244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef.scope: Deactivated successfully. Oct 2 19:15:38.310982 systemd[1]: Stopped cri-containerd-244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef.scope. 
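
The CrashLoopBackOff messages for mount-cgroup in this log double the restart delay each time: back-off 10s after the first failure, 20s above, and 40s in the entries further below. The sketch that follows only illustrates that progression; the 5-minute cap is kubelet's upstream default and an assumption here, not something shown in this log.

# Illustrative only: the crash-loop restart delay doubles per failure
# (10s, 20s, 40s, ... as seen in this log), up to an assumed 5-minute cap.
def backoff_delays(initial: float = 10.0, cap: float = 300.0, restarts: int = 6):
    delay = initial
    for _ in range(restarts):
        yield delay
        delay = min(delay * 2.0, cap)

print(list(backoff_delays()))
# -> [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]
# Only the first three values (10s, 20s, 40s) are actually observed here.
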
Oct 2 19:15:38.315447 env[1145]: time="2023-10-02T19:15:38.315424010Z" level=info msg="shim disconnected" id=244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef Oct 2 19:15:38.315569 env[1145]: time="2023-10-02T19:15:38.315559034Z" level=warning msg="cleaning up after shim disconnected" id=244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef namespace=k8s.io Oct 2 19:15:38.315623 env[1145]: time="2023-10-02T19:15:38.315614601Z" level=info msg="cleaning up dead shim" Oct 2 19:15:38.320013 env[1145]: time="2023-10-02T19:15:38.319994749Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:15:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2004 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:15:38Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:15:38.320198 env[1145]: time="2023-10-02T19:15:38.320172022Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:15:38.320875 env[1145]: time="2023-10-02T19:15:38.320326175Z" level=error msg="Failed to pipe stdout of container \"244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef\"" error="reading from a closed fifo" Oct 2 19:15:38.320970 env[1145]: time="2023-10-02T19:15:38.320739595Z" level=error msg="Failed to pipe stderr of container \"244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef\"" error="reading from a closed fifo" Oct 2 19:15:38.321339 env[1145]: time="2023-10-02T19:15:38.321316215Z" level=error msg="StartContainer for \"244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:15:38.321889 kubelet[1563]: E1002 19:15:38.321498 1563 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef" Oct 2 19:15:38.321889 kubelet[1563]: E1002 19:15:38.321572 1563 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:15:38.321889 kubelet[1563]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:15:38.321889 kubelet[1563]: rm /hostbin/cilium-mount Oct 2 19:15:38.321889 kubelet[1563]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7gjsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:15:38.321889 kubelet[1563]: E1002 19:15:38.321604 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-vcp76" podUID=c818cc1e-986d-420e-8a41-56984e15e30f Oct 2 19:15:38.348437 kubelet[1563]: I1002 19:15:38.348414 1563 scope.go:115] "RemoveContainer" containerID="72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7" Oct 2 19:15:38.348664 kubelet[1563]: I1002 19:15:38.348650 1563 scope.go:115] "RemoveContainer" containerID="72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7" Oct 2 19:15:38.349528 env[1145]: time="2023-10-02T19:15:38.349502829Z" level=info msg="RemoveContainer for \"72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7\"" Oct 2 19:15:38.349994 env[1145]: time="2023-10-02T19:15:38.349973698Z" level=info msg="RemoveContainer for \"72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7\"" Oct 2 19:15:38.350055 env[1145]: time="2023-10-02T19:15:38.350029799Z" level=error msg="RemoveContainer for \"72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7\" failed" error="failed to set removing state for container \"72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7\": container is already in removing state" Oct 2 19:15:38.350201 kubelet[1563]: E1002 19:15:38.350185 1563 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7\": container is already in 
removing state" containerID="72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7" Oct 2 19:15:38.350247 kubelet[1563]: E1002 19:15:38.350211 1563 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7": container is already in removing state; Skipping pod "cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f)" Oct 2 19:15:38.350395 kubelet[1563]: E1002 19:15:38.350380 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f)\"" pod="kube-system/cilium-vcp76" podUID=c818cc1e-986d-420e-8a41-56984e15e30f Oct 2 19:15:38.352295 env[1145]: time="2023-10-02T19:15:38.352267607Z" level=info msg="RemoveContainer for \"72823249d2eec643efdb8b672669205a4d575b3470d616683f65e3ec206500b7\" returns successfully" Oct 2 19:15:39.205933 kubelet[1563]: E1002 19:15:39.205910 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:39.284702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef-rootfs.mount: Deactivated successfully. Oct 2 19:15:40.206765 kubelet[1563]: E1002 19:15:40.206742 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:41.207242 kubelet[1563]: E1002 19:15:41.207219 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:41.420017 kubelet[1563]: W1002 19:15:41.419996 1563 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc818cc1e_986d_420e_8a41_56984e15e30f.slice/cri-containerd-244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef.scope WatchSource:0}: task 244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef not found: not found Oct 2 19:15:42.208024 kubelet[1563]: E1002 19:15:42.208000 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:43.208548 kubelet[1563]: E1002 19:15:43.208520 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:44.208939 kubelet[1563]: E1002 19:15:44.208909 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:45.209283 kubelet[1563]: E1002 19:15:45.209248 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:46.210567 kubelet[1563]: E1002 19:15:46.210545 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:47.168621 kubelet[1563]: E1002 19:15:47.168578 1563 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:47.211903 kubelet[1563]: E1002 19:15:47.211875 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:48.212928 kubelet[1563]: E1002 
19:15:48.212909 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:49.213876 kubelet[1563]: E1002 19:15:49.213841 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:50.215033 kubelet[1563]: E1002 19:15:50.214999 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:51.215430 kubelet[1563]: E1002 19:15:51.215399 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:52.216112 kubelet[1563]: E1002 19:15:52.216084 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:52.278184 kubelet[1563]: E1002 19:15:52.278163 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f)\"" pod="kube-system/cilium-vcp76" podUID=c818cc1e-986d-420e-8a41-56984e15e30f Oct 2 19:15:53.216412 kubelet[1563]: E1002 19:15:53.216386 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:54.217446 kubelet[1563]: E1002 19:15:54.217387 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:55.217964 kubelet[1563]: E1002 19:15:55.217943 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:56.218611 kubelet[1563]: E1002 19:15:56.218587 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:57.219426 kubelet[1563]: E1002 19:15:57.219401 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:58.219841 kubelet[1563]: E1002 19:15:58.219813 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:59.220123 kubelet[1563]: E1002 19:15:59.220100 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:00.220989 kubelet[1563]: E1002 19:16:00.220951 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:01.221139 kubelet[1563]: E1002 19:16:01.221105 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:02.221282 kubelet[1563]: E1002 19:16:02.221258 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:03.222427 kubelet[1563]: E1002 19:16:03.222402 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:04.222939 kubelet[1563]: E1002 19:16:04.222922 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:05.223795 kubelet[1563]: E1002 19:16:05.223767 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:06.224465 kubelet[1563]: E1002 19:16:06.224441 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:07.168532 kubelet[1563]: E1002 19:16:07.168509 1563 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:07.224900 kubelet[1563]: E1002 19:16:07.224866 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:07.278302 kubelet[1563]: E1002 19:16:07.277949 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f)\"" pod="kube-system/cilium-vcp76" podUID=c818cc1e-986d-420e-8a41-56984e15e30f Oct 2 19:16:08.225903 kubelet[1563]: E1002 19:16:08.225877 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:09.226772 kubelet[1563]: E1002 19:16:09.226744 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:10.227918 kubelet[1563]: E1002 19:16:10.227886 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:11.228732 kubelet[1563]: E1002 19:16:11.228697 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:12.229980 kubelet[1563]: E1002 19:16:12.229961 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:13.230333 kubelet[1563]: E1002 19:16:13.230308 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:14.231404 kubelet[1563]: E1002 19:16:14.231379 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:15.232645 kubelet[1563]: E1002 19:16:15.232617 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:16.232975 kubelet[1563]: E1002 19:16:16.232949 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:17.233346 kubelet[1563]: E1002 19:16:17.233316 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:18.233942 kubelet[1563]: E1002 19:16:18.233919 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:18.278685 kubelet[1563]: E1002 19:16:18.278669 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f)\"" pod="kube-system/cilium-vcp76" podUID=c818cc1e-986d-420e-8a41-56984e15e30f Oct 2 19:16:19.234879 kubelet[1563]: E1002 19:16:19.234841 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:20.235790 
kubelet[1563]: E1002 19:16:20.235756 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:21.236785 kubelet[1563]: E1002 19:16:21.236760 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:22.237305 kubelet[1563]: E1002 19:16:22.237259 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:23.237832 kubelet[1563]: E1002 19:16:23.237804 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:24.238965 kubelet[1563]: E1002 19:16:24.238933 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:25.239287 kubelet[1563]: E1002 19:16:25.239262 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:26.239839 kubelet[1563]: E1002 19:16:26.239816 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:27.169435 kubelet[1563]: E1002 19:16:27.169406 1563 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:27.240613 kubelet[1563]: E1002 19:16:27.240571 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:28.241234 kubelet[1563]: E1002 19:16:28.241210 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:29.241943 kubelet[1563]: E1002 19:16:29.241908 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:30.242303 kubelet[1563]: E1002 19:16:30.242198 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:31.242895 kubelet[1563]: E1002 19:16:31.242867 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:32.244258 kubelet[1563]: E1002 19:16:32.244242 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:32.280578 env[1145]: time="2023-10-02T19:16:32.280526355Z" level=info msg="CreateContainer within sandbox \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:16:32.292442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1520743285.mount: Deactivated successfully. Oct 2 19:16:32.299494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3427401950.mount: Deactivated successfully. 
Oct 2 19:16:32.300330 env[1145]: time="2023-10-02T19:16:32.300308306Z" level=info msg="CreateContainer within sandbox \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"9fa46b79b8b1bee387a27226cd3354f9d9ba6d0183d59a87bccbef6aaa1fa3fe\"" Oct 2 19:16:32.300874 env[1145]: time="2023-10-02T19:16:32.300854076Z" level=info msg="StartContainer for \"9fa46b79b8b1bee387a27226cd3354f9d9ba6d0183d59a87bccbef6aaa1fa3fe\"" Oct 2 19:16:32.313917 systemd[1]: Started cri-containerd-9fa46b79b8b1bee387a27226cd3354f9d9ba6d0183d59a87bccbef6aaa1fa3fe.scope. Oct 2 19:16:32.322955 systemd[1]: cri-containerd-9fa46b79b8b1bee387a27226cd3354f9d9ba6d0183d59a87bccbef6aaa1fa3fe.scope: Deactivated successfully. Oct 2 19:16:32.323117 systemd[1]: Stopped cri-containerd-9fa46b79b8b1bee387a27226cd3354f9d9ba6d0183d59a87bccbef6aaa1fa3fe.scope. Oct 2 19:16:32.328308 env[1145]: time="2023-10-02T19:16:32.328281331Z" level=info msg="shim disconnected" id=9fa46b79b8b1bee387a27226cd3354f9d9ba6d0183d59a87bccbef6aaa1fa3fe Oct 2 19:16:32.328473 env[1145]: time="2023-10-02T19:16:32.328460474Z" level=warning msg="cleaning up after shim disconnected" id=9fa46b79b8b1bee387a27226cd3354f9d9ba6d0183d59a87bccbef6aaa1fa3fe namespace=k8s.io Oct 2 19:16:32.328557 env[1145]: time="2023-10-02T19:16:32.328544061Z" level=info msg="cleaning up dead shim" Oct 2 19:16:32.334572 env[1145]: time="2023-10-02T19:16:32.334530086Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:16:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2049 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:16:32Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9fa46b79b8b1bee387a27226cd3354f9d9ba6d0183d59a87bccbef6aaa1fa3fe/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:16:32.334750 env[1145]: time="2023-10-02T19:16:32.334698636Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:16:32.335771 env[1145]: time="2023-10-02T19:16:32.335747062Z" level=error msg="Failed to pipe stderr of container \"9fa46b79b8b1bee387a27226cd3354f9d9ba6d0183d59a87bccbef6aaa1fa3fe\"" error="reading from a closed fifo" Oct 2 19:16:32.335836 env[1145]: time="2023-10-02T19:16:32.335750911Z" level=error msg="Failed to pipe stdout of container \"9fa46b79b8b1bee387a27226cd3354f9d9ba6d0183d59a87bccbef6aaa1fa3fe\"" error="reading from a closed fifo" Oct 2 19:16:32.336286 env[1145]: time="2023-10-02T19:16:32.336264792Z" level=error msg="StartContainer for \"9fa46b79b8b1bee387a27226cd3354f9d9ba6d0183d59a87bccbef6aaa1fa3fe\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:16:32.336692 kubelet[1563]: E1002 19:16:32.336407 1563 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9fa46b79b8b1bee387a27226cd3354f9d9ba6d0183d59a87bccbef6aaa1fa3fe" Oct 2 19:16:32.336692 kubelet[1563]: E1002 19:16:32.336474 1563 kuberuntime_manager.go:1212] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:16:32.336692 kubelet[1563]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:16:32.336692 kubelet[1563]: rm /hostbin/cilium-mount Oct 2 19:16:32.336692 kubelet[1563]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7gjsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:16:32.336692 kubelet[1563]: E1002 19:16:32.336499 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-vcp76" podUID=c818cc1e-986d-420e-8a41-56984e15e30f Oct 2 19:16:32.413781 kubelet[1563]: I1002 19:16:32.413347 1563 scope.go:115] "RemoveContainer" containerID="244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef" Oct 2 19:16:32.413781 kubelet[1563]: I1002 19:16:32.413621 1563 scope.go:115] "RemoveContainer" containerID="244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef" Oct 2 19:16:32.414967 env[1145]: time="2023-10-02T19:16:32.414845947Z" level=info msg="RemoveContainer for \"244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef\"" Oct 2 19:16:32.415139 env[1145]: time="2023-10-02T19:16:32.415122045Z" level=info msg="RemoveContainer for \"244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef\"" Oct 2 19:16:32.415328 env[1145]: time="2023-10-02T19:16:32.415297278Z" level=error msg="RemoveContainer for \"244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef\" failed" error="failed to set 
removing state for container \"244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef\": container is already in removing state" Oct 2 19:16:32.415899 kubelet[1563]: E1002 19:16:32.415471 1563 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef\": container is already in removing state" containerID="244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef" Oct 2 19:16:32.415899 kubelet[1563]: E1002 19:16:32.415503 1563 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef": container is already in removing state; Skipping pod "cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f)" Oct 2 19:16:32.415899 kubelet[1563]: E1002 19:16:32.415758 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f)\"" pod="kube-system/cilium-vcp76" podUID=c818cc1e-986d-420e-8a41-56984e15e30f Oct 2 19:16:32.422580 env[1145]: time="2023-10-02T19:16:32.422553207Z" level=info msg="RemoveContainer for \"244aa7ad945a38aa3d1e855c48921cc1c56edc3d1b3e5692250132b2844eebef\" returns successfully" Oct 2 19:16:33.245514 kubelet[1563]: E1002 19:16:33.245483 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:33.285872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fa46b79b8b1bee387a27226cd3354f9d9ba6d0183d59a87bccbef6aaa1fa3fe-rootfs.mount: Deactivated successfully. 
Oct 2 19:16:34.246583 kubelet[1563]: E1002 19:16:34.246554 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:35.246786 kubelet[1563]: E1002 19:16:35.246761 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:35.432276 kubelet[1563]: W1002 19:16:35.432212 1563 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc818cc1e_986d_420e_8a41_56984e15e30f.slice/cri-containerd-9fa46b79b8b1bee387a27226cd3354f9d9ba6d0183d59a87bccbef6aaa1fa3fe.scope WatchSource:0}: task 9fa46b79b8b1bee387a27226cd3354f9d9ba6d0183d59a87bccbef6aaa1fa3fe not found: not found Oct 2 19:16:36.247501 kubelet[1563]: E1002 19:16:36.247469 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:37.248832 kubelet[1563]: E1002 19:16:37.248796 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:38.249159 kubelet[1563]: E1002 19:16:38.249141 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:39.250497 kubelet[1563]: E1002 19:16:39.250470 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:40.251430 kubelet[1563]: E1002 19:16:40.251404 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:41.252417 kubelet[1563]: E1002 19:16:41.252389 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:42.252737 kubelet[1563]: E1002 19:16:42.252707 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:43.253743 kubelet[1563]: E1002 19:16:43.253675 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:44.254391 kubelet[1563]: E1002 19:16:44.254352 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:44.278106 kubelet[1563]: E1002 19:16:44.278067 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f)\"" pod="kube-system/cilium-vcp76" podUID=c818cc1e-986d-420e-8a41-56984e15e30f Oct 2 19:16:45.255294 kubelet[1563]: E1002 19:16:45.255242 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:46.255958 kubelet[1563]: E1002 19:16:46.255938 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:47.169001 kubelet[1563]: E1002 19:16:47.168971 1563 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:47.256768 kubelet[1563]: E1002 19:16:47.256744 1563 kubelet_node_status.go:452] "Node not becoming ready in time after startup" Oct 2 19:16:47.257103 kubelet[1563]: E1002 19:16:47.257090 1563 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:48.257862 kubelet[1563]: E1002 19:16:48.257826 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:49.258786 kubelet[1563]: E1002 19:16:49.258755 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:50.259970 kubelet[1563]: E1002 19:16:50.259950 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:51.260595 kubelet[1563]: E1002 19:16:51.260563 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:51.770546 update_engine[1135]: I1002 19:16:51.770373 1135 prefs.cc:51] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 2 19:16:51.770546 update_engine[1135]: I1002 19:16:51.770408 1135 prefs.cc:51] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 2 19:16:51.780217 update_engine[1135]: I1002 19:16:51.780104 1135 prefs.cc:51] aleph-version not present in /var/lib/update_engine/prefs Oct 2 19:16:51.787435 update_engine[1135]: I1002 19:16:51.787393 1135 omaha_request_params.cc:62] Current group set to lts Oct 2 19:16:51.788641 update_engine[1135]: I1002 19:16:51.787569 1135 update_attempter.cc:495] Already updated boot flags. Skipping. Oct 2 19:16:51.788641 update_engine[1135]: I1002 19:16:51.787575 1135 update_attempter.cc:638] Scheduling an action processor start. Oct 2 19:16:51.788641 update_engine[1135]: I1002 19:16:51.787585 1135 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 2 19:16:51.788641 update_engine[1135]: I1002 19:16:51.787613 1135 prefs.cc:51] previous-version not present in /var/lib/update_engine/prefs Oct 2 19:16:51.788641 update_engine[1135]: I1002 19:16:51.787660 1135 omaha_request_action.cc:268] Posting an Omaha request to https://public.update.flatcar-linux.net/v1/update/ Oct 2 19:16:51.788641 update_engine[1135]: I1002 19:16:51.787664 1135 omaha_request_action.cc:269] Request: Oct 2 19:16:51.788641 update_engine[1135]: Oct 2 19:16:51.788641 update_engine[1135]: Oct 2 19:16:51.788641 update_engine[1135]: Oct 2 19:16:51.788641 update_engine[1135]: Oct 2 19:16:51.788641 update_engine[1135]: Oct 2 19:16:51.788641 update_engine[1135]: Oct 2 19:16:51.788641 update_engine[1135]: Oct 2 19:16:51.788641 update_engine[1135]: Oct 2 19:16:51.788641 update_engine[1135]: I1002 19:16:51.787670 1135 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 2 19:16:51.788641 update_engine[1135]: I1002 19:16:51.788483 1135 libcurl_http_fetcher.cc:174] Setting up curl options for HTTPS Oct 2 19:16:51.788641 update_engine[1135]: I1002 19:16:51.788625 1135 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 2 19:16:51.788963 locksmithd[1185]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 2 19:16:52.246402 kubelet[1563]: E1002 19:16:52.246385 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:16:52.261572 kubelet[1563]: E1002 19:16:52.261549 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:53.000862 update_engine[1135]: I1002 19:16:53.000833 1135 prefs.cc:51] update-server-cert-0-2 not present in /var/lib/update_engine/prefs Oct 2 19:16:53.001255 update_engine[1135]: I1002 19:16:53.001246 1135 prefs.cc:51] update-server-cert-0-1 not present in /var/lib/update_engine/prefs Oct 2 19:16:53.001394 update_engine[1135]: I1002 19:16:53.001386 1135 prefs.cc:51] update-server-cert-0-0 not present in /var/lib/update_engine/prefs Oct 2 19:16:53.262263 kubelet[1563]: E1002 19:16:53.261970 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:53.351295 update_engine[1135]: I1002 19:16:53.351270 1135 libcurl_http_fetcher.cc:263] HTTP response code: 200 Oct 2 19:16:53.352228 update_engine[1135]: I1002 19:16:53.352216 1135 libcurl_http_fetcher.cc:320] Transfer completed (200), 314 bytes downloaded Oct 2 19:16:53.352282 update_engine[1135]: I1002 19:16:53.352272 1135 omaha_request_action.cc:619] Omaha request response: Oct 2 19:16:53.352282 update_engine[1135]: Oct 2 19:16:53.359049 update_engine[1135]: I1002 19:16:53.359020 1135 omaha_request_action.cc:409] No update. Oct 2 19:16:53.359175 update_engine[1135]: I1002 19:16:53.359164 1135 action_processor.cc:82] ActionProcessor::ActionComplete: finished OmahaRequestAction, starting OmahaResponseHandlerAction Oct 2 19:16:53.359215 update_engine[1135]: I1002 19:16:53.359207 1135 omaha_response_handler_action.cc:36] There are no updates. Aborting. Oct 2 19:16:53.359256 update_engine[1135]: I1002 19:16:53.359248 1135 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaResponseHandlerAction action failed. Aborting processing. Oct 2 19:16:53.359295 update_engine[1135]: I1002 19:16:53.359288 1135 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaResponseHandlerAction Oct 2 19:16:53.359336 update_engine[1135]: I1002 19:16:53.359328 1135 update_attempter.cc:302] Processing Done. Oct 2 19:16:53.359391 update_engine[1135]: I1002 19:16:53.359383 1135 update_attempter.cc:338] No update. 
Oct 2 19:16:53.359450 update_engine[1135]: I1002 19:16:53.359437 1135 update_check_scheduler.cc:74] Next update check in 47m24s Oct 2 19:16:53.359748 locksmithd[1185]: LastCheckedTime=1696274213 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 2 19:16:54.262503 kubelet[1563]: E1002 19:16:54.262465 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:55.262785 kubelet[1563]: E1002 19:16:55.262743 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:56.263613 kubelet[1563]: E1002 19:16:56.263575 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:57.247617 kubelet[1563]: E1002 19:16:57.247589 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:16:57.264950 kubelet[1563]: E1002 19:16:57.264928 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:58.265641 kubelet[1563]: E1002 19:16:58.265607 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:58.278471 kubelet[1563]: E1002 19:16:58.278456 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f)\"" pod="kube-system/cilium-vcp76" podUID=c818cc1e-986d-420e-8a41-56984e15e30f Oct 2 19:16:59.266007 kubelet[1563]: E1002 19:16:59.265980 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:00.266946 kubelet[1563]: E1002 19:17:00.266920 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:01.267426 kubelet[1563]: E1002 19:17:01.267399 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:02.248983 kubelet[1563]: E1002 19:17:02.248966 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:02.268585 kubelet[1563]: E1002 19:17:02.268558 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:03.269780 kubelet[1563]: E1002 19:17:03.269707 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:04.270785 kubelet[1563]: E1002 19:17:04.270757 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:05.272073 kubelet[1563]: E1002 19:17:05.272039 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:06.272779 kubelet[1563]: E1002 19:17:06.272751 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:07.169250 kubelet[1563]: E1002 
19:17:07.169224 1563 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:07.250093 kubelet[1563]: E1002 19:17:07.250071 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:07.273334 kubelet[1563]: E1002 19:17:07.273299 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:08.274439 kubelet[1563]: E1002 19:17:08.274404 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:09.274669 kubelet[1563]: E1002 19:17:09.274636 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:10.274813 kubelet[1563]: E1002 19:17:10.274788 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:11.275497 kubelet[1563]: E1002 19:17:11.275468 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:12.251091 kubelet[1563]: E1002 19:17:12.251074 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:12.276381 kubelet[1563]: E1002 19:17:12.276358 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:13.277437 kubelet[1563]: E1002 19:17:13.277412 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:13.278044 kubelet[1563]: E1002 19:17:13.278033 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f)\"" pod="kube-system/cilium-vcp76" podUID=c818cc1e-986d-420e-8a41-56984e15e30f Oct 2 19:17:14.277808 kubelet[1563]: E1002 19:17:14.277775 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:15.277962 kubelet[1563]: E1002 19:17:15.277928 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:16.278168 kubelet[1563]: E1002 19:17:16.278139 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:17.252481 kubelet[1563]: E1002 19:17:17.252465 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:17.278488 kubelet[1563]: E1002 19:17:17.278469 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:18.278990 kubelet[1563]: E1002 19:17:18.278967 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:19.279453 kubelet[1563]: E1002 19:17:19.279428 1563 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:20.280403 kubelet[1563]: E1002 19:17:20.280378 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:21.281434 kubelet[1563]: E1002 19:17:21.281415 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:22.253223 kubelet[1563]: E1002 19:17:22.253171 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:22.282601 kubelet[1563]: E1002 19:17:22.282571 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:23.282881 kubelet[1563]: E1002 19:17:23.282849 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:24.283272 kubelet[1563]: E1002 19:17:24.283235 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:25.283946 kubelet[1563]: E1002 19:17:25.283926 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:26.284808 kubelet[1563]: E1002 19:17:26.284770 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:27.169145 kubelet[1563]: E1002 19:17:27.169113 1563 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:27.253470 kubelet[1563]: E1002 19:17:27.253454 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:27.285138 kubelet[1563]: E1002 19:17:27.285117 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:28.277811 kubelet[1563]: E1002 19:17:28.277790 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f)\"" pod="kube-system/cilium-vcp76" podUID=c818cc1e-986d-420e-8a41-56984e15e30f Oct 2 19:17:28.285730 kubelet[1563]: E1002 19:17:28.285697 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:29.286447 kubelet[1563]: E1002 19:17:29.286412 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:30.287420 kubelet[1563]: E1002 19:17:30.287375 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:31.287831 kubelet[1563]: E1002 19:17:31.287804 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:32.254098 kubelet[1563]: E1002 19:17:32.254079 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:32.288888 
kubelet[1563]: E1002 19:17:32.288854 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:33.290043 kubelet[1563]: E1002 19:17:33.290018 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:34.290372 kubelet[1563]: E1002 19:17:34.290345 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:35.291430 kubelet[1563]: E1002 19:17:35.291410 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:36.292511 kubelet[1563]: E1002 19:17:36.292485 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:37.254690 kubelet[1563]: E1002 19:17:37.254667 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:37.293643 kubelet[1563]: E1002 19:17:37.293600 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:38.294410 kubelet[1563]: E1002 19:17:38.294384 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:39.295484 kubelet[1563]: E1002 19:17:39.295464 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:40.296807 kubelet[1563]: E1002 19:17:40.296774 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:41.278747 kubelet[1563]: E1002 19:17:41.278727 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-vcp76_kube-system(c818cc1e-986d-420e-8a41-56984e15e30f)\"" pod="kube-system/cilium-vcp76" podUID=c818cc1e-986d-420e-8a41-56984e15e30f Oct 2 19:17:41.297527 kubelet[1563]: E1002 19:17:41.297501 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:42.255362 kubelet[1563]: E1002 19:17:42.255342 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:42.298926 kubelet[1563]: E1002 19:17:42.298902 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:43.299748 kubelet[1563]: E1002 19:17:43.299721 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:44.300279 kubelet[1563]: E1002 19:17:44.300242 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:45.301075 kubelet[1563]: E1002 19:17:45.301039 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:46.301764 kubelet[1563]: E1002 19:17:46.301743 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:17:47.169652 kubelet[1563]: E1002 19:17:47.169598 1563 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:47.256588 kubelet[1563]: E1002 19:17:47.256531 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:47.302629 kubelet[1563]: E1002 19:17:47.302600 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:48.303433 kubelet[1563]: E1002 19:17:48.303404 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:49.304724 kubelet[1563]: E1002 19:17:49.304686 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:50.305082 kubelet[1563]: E1002 19:17:50.305036 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:51.305697 kubelet[1563]: E1002 19:17:51.305665 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:52.256939 kubelet[1563]: E1002 19:17:52.256905 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:52.306583 kubelet[1563]: E1002 19:17:52.306553 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:53.306981 kubelet[1563]: E1002 19:17:53.306937 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:54.307648 kubelet[1563]: E1002 19:17:54.307626 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:54.468749 env[1145]: time="2023-10-02T19:17:54.467633441Z" level=info msg="StopPodSandbox for \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\"" Oct 2 19:17:54.468749 env[1145]: time="2023-10-02T19:17:54.467697788Z" level=info msg="Container to stop \"9fa46b79b8b1bee387a27226cd3354f9d9ba6d0183d59a87bccbef6aaa1fa3fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:17:54.469749 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad-shm.mount: Deactivated successfully. Oct 2 19:17:54.476060 systemd[1]: cri-containerd-a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad.scope: Deactivated successfully. Oct 2 19:17:54.477744 kernel: kauditd_printk_skb: 186 callbacks suppressed Oct 2 19:17:54.477836 kernel: audit: type=1334 audit(1696274274.474:639): prog-id=61 op=UNLOAD Oct 2 19:17:54.474000 audit: BPF prog-id=61 op=UNLOAD Oct 2 19:17:54.480000 audit: BPF prog-id=67 op=UNLOAD Oct 2 19:17:54.483761 kernel: audit: type=1334 audit(1696274274.480:640): prog-id=67 op=UNLOAD Oct 2 19:17:54.494102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad-rootfs.mount: Deactivated successfully. 
Oct 2 19:17:54.503392 env[1145]: time="2023-10-02T19:17:54.503352432Z" level=info msg="shim disconnected" id=a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad Oct 2 19:17:54.503536 env[1145]: time="2023-10-02T19:17:54.503386313Z" level=warning msg="cleaning up after shim disconnected" id=a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad namespace=k8s.io Oct 2 19:17:54.503536 env[1145]: time="2023-10-02T19:17:54.503408231Z" level=info msg="cleaning up dead shim" Oct 2 19:17:54.509148 env[1145]: time="2023-10-02T19:17:54.509108247Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:17:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2088 runtime=io.containerd.runc.v2\n" Oct 2 19:17:54.509332 env[1145]: time="2023-10-02T19:17:54.509312687Z" level=info msg="TearDown network for sandbox \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\" successfully" Oct 2 19:17:54.509365 env[1145]: time="2023-10-02T19:17:54.509330184Z" level=info msg="StopPodSandbox for \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\" returns successfully" Oct 2 19:17:54.519051 kubelet[1563]: I1002 19:17:54.519030 1563 scope.go:115] "RemoveContainer" containerID="9fa46b79b8b1bee387a27226cd3354f9d9ba6d0183d59a87bccbef6aaa1fa3fe" Oct 2 19:17:54.519853 env[1145]: time="2023-10-02T19:17:54.519833993Z" level=info msg="RemoveContainer for \"9fa46b79b8b1bee387a27226cd3354f9d9ba6d0183d59a87bccbef6aaa1fa3fe\"" Oct 2 19:17:54.521223 env[1145]: time="2023-10-02T19:17:54.521206439Z" level=info msg="RemoveContainer for \"9fa46b79b8b1bee387a27226cd3354f9d9ba6d0183d59a87bccbef6aaa1fa3fe\" returns successfully" Oct 2 19:17:54.607536 kubelet[1563]: I1002 19:17:54.605774 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-cilium-run\") pod \"c818cc1e-986d-420e-8a41-56984e15e30f\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " Oct 2 19:17:54.607536 kubelet[1563]: I1002 19:17:54.605819 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-lib-modules\") pod \"c818cc1e-986d-420e-8a41-56984e15e30f\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " Oct 2 19:17:54.607536 kubelet[1563]: I1002 19:17:54.605833 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-xtables-lock\") pod \"c818cc1e-986d-420e-8a41-56984e15e30f\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " Oct 2 19:17:54.607536 kubelet[1563]: I1002 19:17:54.605841 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c818cc1e-986d-420e-8a41-56984e15e30f" (UID: "c818cc1e-986d-420e-8a41-56984e15e30f"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:17:54.607536 kubelet[1563]: I1002 19:17:54.605851 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c818cc1e-986d-420e-8a41-56984e15e30f-cilium-config-path\") pod \"c818cc1e-986d-420e-8a41-56984e15e30f\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " Oct 2 19:17:54.607536 kubelet[1563]: I1002 19:17:54.605874 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-cni-path\") pod \"c818cc1e-986d-420e-8a41-56984e15e30f\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " Oct 2 19:17:54.607536 kubelet[1563]: I1002 19:17:54.605880 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c818cc1e-986d-420e-8a41-56984e15e30f" (UID: "c818cc1e-986d-420e-8a41-56984e15e30f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:17:54.607536 kubelet[1563]: I1002 19:17:54.605887 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-cilium-cgroup\") pod \"c818cc1e-986d-420e-8a41-56984e15e30f\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " Oct 2 19:17:54.607536 kubelet[1563]: I1002 19:17:54.605889 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-cni-path" (OuterVolumeSpecName: "cni-path") pod "c818cc1e-986d-420e-8a41-56984e15e30f" (UID: "c818cc1e-986d-420e-8a41-56984e15e30f"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:17:54.607536 kubelet[1563]: I1002 19:17:54.605897 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-bpf-maps\") pod \"c818cc1e-986d-420e-8a41-56984e15e30f\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " Oct 2 19:17:54.607536 kubelet[1563]: I1002 19:17:54.605906 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-host-proc-sys-net\") pod \"c818cc1e-986d-420e-8a41-56984e15e30f\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " Oct 2 19:17:54.607536 kubelet[1563]: I1002 19:17:54.605918 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-hostproc\") pod \"c818cc1e-986d-420e-8a41-56984e15e30f\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " Oct 2 19:17:54.607536 kubelet[1563]: I1002 19:17:54.605930 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-host-proc-sys-kernel\") pod \"c818cc1e-986d-420e-8a41-56984e15e30f\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " Oct 2 19:17:54.607536 kubelet[1563]: I1002 19:17:54.605950 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gjsm\" (UniqueName: \"kubernetes.io/projected/c818cc1e-986d-420e-8a41-56984e15e30f-kube-api-access-7gjsm\") pod \"c818cc1e-986d-420e-8a41-56984e15e30f\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " Oct 2 19:17:54.607536 kubelet[1563]: I1002 19:17:54.605963 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c818cc1e-986d-420e-8a41-56984e15e30f-clustermesh-secrets\") pod \"c818cc1e-986d-420e-8a41-56984e15e30f\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " Oct 2 19:17:54.607536 kubelet[1563]: I1002 19:17:54.605977 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-etc-cni-netd\") pod \"c818cc1e-986d-420e-8a41-56984e15e30f\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " Oct 2 19:17:54.607994 kubelet[1563]: I1002 19:17:54.605989 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c818cc1e-986d-420e-8a41-56984e15e30f-hubble-tls\") pod \"c818cc1e-986d-420e-8a41-56984e15e30f\" (UID: \"c818cc1e-986d-420e-8a41-56984e15e30f\") " Oct 2 19:17:54.607994 kubelet[1563]: I1002 19:17:54.606003 1563 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-cni-path\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:17:54.607994 kubelet[1563]: I1002 19:17:54.606010 1563 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-cilium-run\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:17:54.607994 kubelet[1563]: W1002 19:17:54.605996 1563 empty_dir.go:525] Warning: Failed to clear quota on 
/var/lib/kubelet/pods/c818cc1e-986d-420e-8a41-56984e15e30f/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:17:54.607994 kubelet[1563]: I1002 19:17:54.606024 1563 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-lib-modules\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:17:54.607994 kubelet[1563]: I1002 19:17:54.605873 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c818cc1e-986d-420e-8a41-56984e15e30f" (UID: "c818cc1e-986d-420e-8a41-56984e15e30f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:17:54.607994 kubelet[1563]: I1002 19:17:54.606433 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c818cc1e-986d-420e-8a41-56984e15e30f" (UID: "c818cc1e-986d-420e-8a41-56984e15e30f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:17:54.607994 kubelet[1563]: I1002 19:17:54.606449 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c818cc1e-986d-420e-8a41-56984e15e30f" (UID: "c818cc1e-986d-420e-8a41-56984e15e30f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:17:54.607994 kubelet[1563]: I1002 19:17:54.606458 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c818cc1e-986d-420e-8a41-56984e15e30f" (UID: "c818cc1e-986d-420e-8a41-56984e15e30f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:17:54.607994 kubelet[1563]: I1002 19:17:54.606472 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-hostproc" (OuterVolumeSpecName: "hostproc") pod "c818cc1e-986d-420e-8a41-56984e15e30f" (UID: "c818cc1e-986d-420e-8a41-56984e15e30f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:17:54.607994 kubelet[1563]: I1002 19:17:54.606487 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c818cc1e-986d-420e-8a41-56984e15e30f" (UID: "c818cc1e-986d-420e-8a41-56984e15e30f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:17:54.607994 kubelet[1563]: I1002 19:17:54.606954 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c818cc1e-986d-420e-8a41-56984e15e30f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c818cc1e-986d-420e-8a41-56984e15e30f" (UID: "c818cc1e-986d-420e-8a41-56984e15e30f"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:17:54.607994 kubelet[1563]: I1002 19:17:54.607077 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c818cc1e-986d-420e-8a41-56984e15e30f" (UID: "c818cc1e-986d-420e-8a41-56984e15e30f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:17:54.613098 systemd[1]: var-lib-kubelet-pods-c818cc1e\x2d986d\x2d420e\x2d8a41\x2d56984e15e30f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:17:54.615256 systemd[1]: var-lib-kubelet-pods-c818cc1e\x2d986d\x2d420e\x2d8a41\x2d56984e15e30f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7gjsm.mount: Deactivated successfully. Oct 2 19:17:54.615306 systemd[1]: var-lib-kubelet-pods-c818cc1e\x2d986d\x2d420e\x2d8a41\x2d56984e15e30f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:17:54.616013 kubelet[1563]: I1002 19:17:54.615998 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c818cc1e-986d-420e-8a41-56984e15e30f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c818cc1e-986d-420e-8a41-56984e15e30f" (UID: "c818cc1e-986d-420e-8a41-56984e15e30f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:17:54.616116 kubelet[1563]: I1002 19:17:54.616105 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c818cc1e-986d-420e-8a41-56984e15e30f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c818cc1e-986d-420e-8a41-56984e15e30f" (UID: "c818cc1e-986d-420e-8a41-56984e15e30f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:17:54.616203 kubelet[1563]: I1002 19:17:54.616192 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c818cc1e-986d-420e-8a41-56984e15e30f-kube-api-access-7gjsm" (OuterVolumeSpecName: "kube-api-access-7gjsm") pod "c818cc1e-986d-420e-8a41-56984e15e30f" (UID: "c818cc1e-986d-420e-8a41-56984e15e30f"). InnerVolumeSpecName "kube-api-access-7gjsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:17:54.706480 kubelet[1563]: I1002 19:17:54.706451 1563 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-xtables-lock\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:17:54.706597 kubelet[1563]: I1002 19:17:54.706589 1563 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c818cc1e-986d-420e-8a41-56984e15e30f-cilium-config-path\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:17:54.706664 kubelet[1563]: I1002 19:17:54.706657 1563 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-bpf-maps\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:17:54.706737 kubelet[1563]: I1002 19:17:54.706708 1563 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-cilium-cgroup\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:17:54.706797 kubelet[1563]: I1002 19:17:54.706790 1563 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-host-proc-sys-net\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:17:54.706848 kubelet[1563]: I1002 19:17:54.706841 1563 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-hostproc\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:17:54.706914 kubelet[1563]: I1002 19:17:54.706905 1563 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-host-proc-sys-kernel\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:17:54.706974 kubelet[1563]: I1002 19:17:54.706967 1563 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7gjsm\" (UniqueName: \"kubernetes.io/projected/c818cc1e-986d-420e-8a41-56984e15e30f-kube-api-access-7gjsm\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:17:54.707031 kubelet[1563]: I1002 19:17:54.707024 1563 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c818cc1e-986d-420e-8a41-56984e15e30f-etc-cni-netd\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:17:54.707078 kubelet[1563]: I1002 19:17:54.707072 1563 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c818cc1e-986d-420e-8a41-56984e15e30f-hubble-tls\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:17:54.707133 kubelet[1563]: I1002 19:17:54.707127 1563 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c818cc1e-986d-420e-8a41-56984e15e30f-clustermesh-secrets\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:17:54.822109 systemd[1]: Removed slice kubepods-burstable-podc818cc1e_986d_420e_8a41_56984e15e30f.slice. 
Oct 2 19:17:55.278919 kubelet[1563]: I1002 19:17:55.278898 1563 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=c818cc1e-986d-420e-8a41-56984e15e30f path="/var/lib/kubelet/pods/c818cc1e-986d-420e-8a41-56984e15e30f/volumes" Oct 2 19:17:55.309038 kubelet[1563]: E1002 19:17:55.309003 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:56.309795 kubelet[1563]: E1002 19:17:56.309767 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:56.939117 kubelet[1563]: I1002 19:17:56.939073 1563 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:17:56.939237 kubelet[1563]: E1002 19:17:56.939131 1563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c818cc1e-986d-420e-8a41-56984e15e30f" containerName="mount-cgroup" Oct 2 19:17:56.939237 kubelet[1563]: E1002 19:17:56.939139 1563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c818cc1e-986d-420e-8a41-56984e15e30f" containerName="mount-cgroup" Oct 2 19:17:56.939237 kubelet[1563]: E1002 19:17:56.939143 1563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c818cc1e-986d-420e-8a41-56984e15e30f" containerName="mount-cgroup" Oct 2 19:17:56.939237 kubelet[1563]: I1002 19:17:56.939171 1563 memory_manager.go:346] "RemoveStaleState removing state" podUID="c818cc1e-986d-420e-8a41-56984e15e30f" containerName="mount-cgroup" Oct 2 19:17:56.939237 kubelet[1563]: I1002 19:17:56.939177 1563 memory_manager.go:346] "RemoveStaleState removing state" podUID="c818cc1e-986d-420e-8a41-56984e15e30f" containerName="mount-cgroup" Oct 2 19:17:56.939237 kubelet[1563]: I1002 19:17:56.939182 1563 memory_manager.go:346] "RemoveStaleState removing state" podUID="c818cc1e-986d-420e-8a41-56984e15e30f" containerName="mount-cgroup" Oct 2 19:17:56.939237 kubelet[1563]: E1002 19:17:56.939190 1563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c818cc1e-986d-420e-8a41-56984e15e30f" containerName="mount-cgroup" Oct 2 19:17:56.939237 kubelet[1563]: E1002 19:17:56.939196 1563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c818cc1e-986d-420e-8a41-56984e15e30f" containerName="mount-cgroup" Oct 2 19:17:56.939237 kubelet[1563]: I1002 19:17:56.939203 1563 memory_manager.go:346] "RemoveStaleState removing state" podUID="c818cc1e-986d-420e-8a41-56984e15e30f" containerName="mount-cgroup" Oct 2 19:17:56.939237 kubelet[1563]: I1002 19:17:56.939215 1563 memory_manager.go:346] "RemoveStaleState removing state" podUID="c818cc1e-986d-420e-8a41-56984e15e30f" containerName="mount-cgroup" Oct 2 19:17:56.945088 systemd[1]: Created slice kubepods-burstable-pod5094cd08_b6e2_4fd3_86f6_cb0eef415303.slice. Oct 2 19:17:56.951598 kubelet[1563]: I1002 19:17:56.951572 1563 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:17:56.954549 systemd[1]: Created slice kubepods-besteffort-pod270089f2_e201_4dd4_ab8e_9f46ee7be306.slice. 
Oct 2 19:17:57.018308 kubelet[1563]: I1002 19:17:57.018269 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-host-proc-sys-kernel\") pod \"cilium-vtnf6\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " pod="kube-system/cilium-vtnf6" Oct 2 19:17:57.018308 kubelet[1563]: I1002 19:17:57.018306 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/270089f2-e201-4dd4-ab8e-9f46ee7be306-cilium-config-path\") pod \"cilium-operator-574c4bb98d-bngvn\" (UID: \"270089f2-e201-4dd4-ab8e-9f46ee7be306\") " pod="kube-system/cilium-operator-574c4bb98d-bngvn" Oct 2 19:17:57.018463 kubelet[1563]: I1002 19:17:57.018323 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-cilium-run\") pod \"cilium-vtnf6\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " pod="kube-system/cilium-vtnf6" Oct 2 19:17:57.018463 kubelet[1563]: I1002 19:17:57.018337 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-lib-modules\") pod \"cilium-vtnf6\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " pod="kube-system/cilium-vtnf6" Oct 2 19:17:57.018463 kubelet[1563]: I1002 19:17:57.018375 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-host-proc-sys-net\") pod \"cilium-vtnf6\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " pod="kube-system/cilium-vtnf6" Oct 2 19:17:57.018463 kubelet[1563]: I1002 19:17:57.018390 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6bdq\" (UniqueName: \"kubernetes.io/projected/5094cd08-b6e2-4fd3-86f6-cb0eef415303-kube-api-access-f6bdq\") pod \"cilium-vtnf6\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " pod="kube-system/cilium-vtnf6" Oct 2 19:17:57.018463 kubelet[1563]: I1002 19:17:57.018407 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-bpf-maps\") pod \"cilium-vtnf6\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " pod="kube-system/cilium-vtnf6" Oct 2 19:17:57.018463 kubelet[1563]: I1002 19:17:57.018422 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-cilium-cgroup\") pod \"cilium-vtnf6\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " pod="kube-system/cilium-vtnf6" Oct 2 19:17:57.018463 kubelet[1563]: I1002 19:17:57.018443 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5094cd08-b6e2-4fd3-86f6-cb0eef415303-clustermesh-secrets\") pod \"cilium-vtnf6\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " pod="kube-system/cilium-vtnf6" Oct 2 19:17:57.018463 kubelet[1563]: I1002 19:17:57.018462 1563 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5094cd08-b6e2-4fd3-86f6-cb0eef415303-cilium-ipsec-secrets\") pod \"cilium-vtnf6\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " pod="kube-system/cilium-vtnf6" Oct 2 19:17:57.018654 kubelet[1563]: I1002 19:17:57.018477 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-hostproc\") pod \"cilium-vtnf6\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " pod="kube-system/cilium-vtnf6" Oct 2 19:17:57.018654 kubelet[1563]: I1002 19:17:57.018489 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-cni-path\") pod \"cilium-vtnf6\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " pod="kube-system/cilium-vtnf6" Oct 2 19:17:57.018654 kubelet[1563]: I1002 19:17:57.018501 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5094cd08-b6e2-4fd3-86f6-cb0eef415303-cilium-config-path\") pod \"cilium-vtnf6\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " pod="kube-system/cilium-vtnf6" Oct 2 19:17:57.018654 kubelet[1563]: I1002 19:17:57.018514 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-etc-cni-netd\") pod \"cilium-vtnf6\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " pod="kube-system/cilium-vtnf6" Oct 2 19:17:57.018654 kubelet[1563]: I1002 19:17:57.018527 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-xtables-lock\") pod \"cilium-vtnf6\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " pod="kube-system/cilium-vtnf6" Oct 2 19:17:57.018654 kubelet[1563]: I1002 19:17:57.018539 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5094cd08-b6e2-4fd3-86f6-cb0eef415303-hubble-tls\") pod \"cilium-vtnf6\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " pod="kube-system/cilium-vtnf6" Oct 2 19:17:57.018654 kubelet[1563]: I1002 19:17:57.018553 1563 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s56lk\" (UniqueName: \"kubernetes.io/projected/270089f2-e201-4dd4-ab8e-9f46ee7be306-kube-api-access-s56lk\") pod \"cilium-operator-574c4bb98d-bngvn\" (UID: \"270089f2-e201-4dd4-ab8e-9f46ee7be306\") " pod="kube-system/cilium-operator-574c4bb98d-bngvn" Oct 2 19:17:57.253926 env[1145]: time="2023-10-02T19:17:57.253327641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vtnf6,Uid:5094cd08-b6e2-4fd3-86f6-cb0eef415303,Namespace:kube-system,Attempt:0,}" Oct 2 19:17:57.258528 env[1145]: time="2023-10-02T19:17:57.258499301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-bngvn,Uid:270089f2-e201-4dd4-ab8e-9f46ee7be306,Namespace:kube-system,Attempt:0,}" Oct 2 19:17:57.260004 kubelet[1563]: E1002 19:17:57.259985 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:57.264667 env[1145]: time="2023-10-02T19:17:57.264612398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:17:57.264805 env[1145]: time="2023-10-02T19:17:57.264644967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:17:57.264805 env[1145]: time="2023-10-02T19:17:57.264652467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:17:57.264805 env[1145]: time="2023-10-02T19:17:57.264735098Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e5548f0bb9d7fbd195b2847ffe3760a7d24a243fc156c6ff4c5a8e19ad5de42 pid=2117 runtime=io.containerd.runc.v2 Oct 2 19:17:57.268228 env[1145]: time="2023-10-02T19:17:57.268182902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:17:57.268369 env[1145]: time="2023-10-02T19:17:57.268354513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:17:57.268441 env[1145]: time="2023-10-02T19:17:57.268426992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:17:57.268612 env[1145]: time="2023-10-02T19:17:57.268594019Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3553060a81529d114395382f72776e36b728be08c8ed594f70e2bdb9e487bccc pid=2128 runtime=io.containerd.runc.v2 Oct 2 19:17:57.272560 systemd[1]: Started cri-containerd-3e5548f0bb9d7fbd195b2847ffe3760a7d24a243fc156c6ff4c5a8e19ad5de42.scope. Oct 2 19:17:57.283636 systemd[1]: Started cri-containerd-3553060a81529d114395382f72776e36b728be08c8ed594f70e2bdb9e487bccc.scope. 
Oct 2 19:17:57.288000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.288000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.295687 kernel: audit: type=1400 audit(1696274277.288:641): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.295773 kernel: audit: type=1400 audit(1696274277.288:642): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.295792 kernel: audit: type=1400 audit(1696274277.288:643): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.288000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.288000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.288000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.303905 kernel: audit: type=1400 audit(1696274277.288:644): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.303984 kernel: audit: type=1400 audit(1696274277.288:645): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.304007 kernel: audit: type=1400 audit(1696274277.288:646): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.288000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.308730 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:17:57.308814 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:17:57.310399 kubelet[1563]: E1002 19:17:57.310382 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:57.288000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.288000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.288000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.289000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.289000 audit: BPF prog-id=72 op=LOAD Oct 2 19:17:57.294000 audit[2133]: AVC avc: denied { bpf } for pid=2133 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.294000 audit[2133]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=2117 pid=2133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:57.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365353534386630626239643766626431393562323834376666653337 Oct 2 19:17:57.294000 audit[2133]: AVC avc: denied { perfmon } for pid=2133 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.294000 audit[2133]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=2117 pid=2133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:57.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365353534386630626239643766626431393562323834376666653337 Oct 2 19:17:57.294000 audit[2133]: AVC avc: denied { bpf } for pid=2133 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.294000 audit[2133]: AVC avc: denied { bpf } for pid=2133 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.294000 audit[2133]: AVC avc: denied { bpf } for pid=2133 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.294000 audit[2133]: AVC avc: denied { perfmon } for pid=2133 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.294000 audit[2133]: AVC avc: denied { perfmon } for pid=2133 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.294000 audit[2133]: AVC avc: denied { perfmon } for pid=2133 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.294000 audit[2133]: AVC avc: denied { perfmon } for pid=2133 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:17:57.294000 audit[2133]: AVC avc: denied { perfmon } for pid=2133 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.294000 audit[2133]: AVC avc: denied { bpf } for pid=2133 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.294000 audit[2133]: AVC avc: denied { bpf } for pid=2133 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.294000 audit: BPF prog-id=73 op=LOAD Oct 2 19:17:57.294000 audit[2133]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c000024f80 items=0 ppid=2117 pid=2133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:57.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365353534386630626239643766626431393562323834376666653337 Oct 2 19:17:57.297000 audit[2133]: AVC avc: denied { bpf } for pid=2133 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.297000 audit[2133]: AVC avc: denied { bpf } for pid=2133 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.297000 audit[2133]: AVC avc: denied { perfmon } for pid=2133 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.297000 audit[2133]: AVC avc: denied { perfmon } for pid=2133 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.297000 audit[2133]: AVC avc: denied { perfmon } for pid=2133 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.297000 audit[2133]: AVC avc: denied { perfmon } for pid=2133 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.297000 audit[2133]: AVC avc: denied { perfmon } for pid=2133 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.297000 audit[2133]: AVC avc: denied { bpf } for pid=2133 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.297000 audit[2133]: AVC avc: denied { bpf } for pid=2133 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.297000 audit: BPF prog-id=74 op=LOAD Oct 2 19:17:57.297000 audit[2133]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c000024fc8 items=0 ppid=2117 pid=2133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:57.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365353534386630626239643766626431393562323834376666653337 Oct 2 19:17:57.297000 audit: BPF prog-id=74 op=UNLOAD Oct 2 19:17:57.297000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:17:57.297000 audit[2133]: AVC avc: denied { bpf } for pid=2133 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.297000 audit[2133]: AVC avc: denied { bpf } for pid=2133 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.297000 audit[2133]: AVC avc: denied { bpf } for pid=2133 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.297000 audit[2133]: AVC avc: denied { perfmon } for pid=2133 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.297000 audit[2133]: AVC avc: denied { perfmon } for pid=2133 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.297000 audit[2133]: AVC avc: denied { perfmon } for pid=2133 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.297000 audit[2133]: AVC avc: denied { perfmon } for pid=2133 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.297000 audit[2133]: AVC avc: denied { perfmon } for pid=2133 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.297000 audit[2133]: AVC avc: denied { bpf } for pid=2133 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.297000 audit[2133]: AVC avc: denied { bpf } for pid=2133 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.297000 audit: BPF prog-id=75 op=LOAD Oct 2 19:17:57.297000 audit[2133]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c0000253d8 items=0 ppid=2117 pid=2133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:57.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365353534386630626239643766626431393562323834376666653337 Oct 2 19:17:57.305000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:17:57.305000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.305000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.305000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.305000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.305000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.305000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.305000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.305000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.305000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:57.318936 env[1145]: time="2023-10-02T19:17:57.318906438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vtnf6,Uid:5094cd08-b6e2-4fd3-86f6-cb0eef415303,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e5548f0bb9d7fbd195b2847ffe3760a7d24a243fc156c6ff4c5a8e19ad5de42\"" Oct 2 19:17:57.320878 env[1145]: time="2023-10-02T19:17:57.320857662Z" level=info msg="CreateContainer within sandbox \"3e5548f0bb9d7fbd195b2847ffe3760a7d24a243fc156c6ff4c5a8e19ad5de42\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:17:57.329202 env[1145]: time="2023-10-02T19:17:57.329166219Z" level=info msg="CreateContainer within sandbox \"3e5548f0bb9d7fbd195b2847ffe3760a7d24a243fc156c6ff4c5a8e19ad5de42\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"004bf174e532836d10b4038e622ed00a4bb4500ef5bd296de7231b8236d24282\"" Oct 2 19:17:57.329663 env[1145]: time="2023-10-02T19:17:57.329643837Z" level=info msg="StartContainer for \"004bf174e532836d10b4038e622ed00a4bb4500ef5bd296de7231b8236d24282\"" Oct 2 19:17:57.343344 systemd[1]: Started cri-containerd-004bf174e532836d10b4038e622ed00a4bb4500ef5bd296de7231b8236d24282.scope. Oct 2 19:17:57.351798 env[1145]: time="2023-10-02T19:17:57.351772886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-bngvn,Uid:270089f2-e201-4dd4-ab8e-9f46ee7be306,Namespace:kube-system,Attempt:0,} returns sandbox id \"3553060a81529d114395382f72776e36b728be08c8ed594f70e2bdb9e487bccc\"" Oct 2 19:17:57.353175 systemd[1]: cri-containerd-004bf174e532836d10b4038e622ed00a4bb4500ef5bd296de7231b8236d24282.scope: Deactivated successfully. 
Oct 2 19:17:57.353346 systemd[1]: Stopped cri-containerd-004bf174e532836d10b4038e622ed00a4bb4500ef5bd296de7231b8236d24282.scope. Oct 2 19:17:57.354243 env[1145]: time="2023-10-02T19:17:57.354224637Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 2 19:17:57.362754 env[1145]: time="2023-10-02T19:17:57.362709311Z" level=info msg="shim disconnected" id=004bf174e532836d10b4038e622ed00a4bb4500ef5bd296de7231b8236d24282 Oct 2 19:17:57.362881 env[1145]: time="2023-10-02T19:17:57.362869548Z" level=warning msg="cleaning up after shim disconnected" id=004bf174e532836d10b4038e622ed00a4bb4500ef5bd296de7231b8236d24282 namespace=k8s.io Oct 2 19:17:57.362936 env[1145]: time="2023-10-02T19:17:57.362921244Z" level=info msg="cleaning up dead shim" Oct 2 19:17:57.368507 env[1145]: time="2023-10-02T19:17:57.368483540Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:17:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2215 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:17:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/004bf174e532836d10b4038e622ed00a4bb4500ef5bd296de7231b8236d24282/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:17:57.368743 env[1145]: time="2023-10-02T19:17:57.368699261Z" level=error msg="copy shim log" error="read /proc/self/fd/32: file already closed" Oct 2 19:17:57.368862 env[1145]: time="2023-10-02T19:17:57.368815648Z" level=error msg="Failed to pipe stdout of container \"004bf174e532836d10b4038e622ed00a4bb4500ef5bd296de7231b8236d24282\"" error="reading from a closed fifo" Oct 2 19:17:57.368909 env[1145]: time="2023-10-02T19:17:57.368848520Z" level=error msg="Failed to pipe stderr of container \"004bf174e532836d10b4038e622ed00a4bb4500ef5bd296de7231b8236d24282\"" error="reading from a closed fifo" Oct 2 19:17:57.369305 env[1145]: time="2023-10-02T19:17:57.369280567Z" level=error msg="StartContainer for \"004bf174e532836d10b4038e622ed00a4bb4500ef5bd296de7231b8236d24282\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:17:57.369545 kubelet[1563]: E1002 19:17:57.369425 1563 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="004bf174e532836d10b4038e622ed00a4bb4500ef5bd296de7231b8236d24282" Oct 2 19:17:57.369545 kubelet[1563]: E1002 19:17:57.369506 1563 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:17:57.369545 kubelet[1563]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:17:57.369545 kubelet[1563]: rm /hostbin/cilium-mount Oct 2 19:17:57.369545 kubelet[1563]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-f6bdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-vtnf6_kube-system(5094cd08-b6e2-4fd3-86f6-cb0eef415303): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:17:57.369545 kubelet[1563]: E1002 19:17:57.369531 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-vtnf6" podUID=5094cd08-b6e2-4fd3-86f6-cb0eef415303 Oct 2 19:17:57.528507 env[1145]: time="2023-10-02T19:17:57.526557380Z" level=info msg="CreateContainer within sandbox \"3e5548f0bb9d7fbd195b2847ffe3760a7d24a243fc156c6ff4c5a8e19ad5de42\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:17:57.534835 env[1145]: time="2023-10-02T19:17:57.534791691Z" level=info msg="CreateContainer within sandbox \"3e5548f0bb9d7fbd195b2847ffe3760a7d24a243fc156c6ff4c5a8e19ad5de42\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d\"" Oct 2 19:17:57.535667 env[1145]: time="2023-10-02T19:17:57.535627049Z" level=info msg="StartContainer for \"a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d\"" Oct 2 19:17:57.548549 systemd[1]: Started cri-containerd-a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d.scope. Oct 2 19:17:57.557179 systemd[1]: cri-containerd-a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d.scope: Deactivated successfully. Oct 2 19:17:57.557358 systemd[1]: Stopped cri-containerd-a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d.scope. 
Oct 2 19:17:57.562217 env[1145]: time="2023-10-02T19:17:57.562183573Z" level=info msg="shim disconnected" id=a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d Oct 2 19:17:57.562323 env[1145]: time="2023-10-02T19:17:57.562311759Z" level=warning msg="cleaning up after shim disconnected" id=a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d namespace=k8s.io Oct 2 19:17:57.562392 env[1145]: time="2023-10-02T19:17:57.562382801Z" level=info msg="cleaning up dead shim" Oct 2 19:17:57.567090 env[1145]: time="2023-10-02T19:17:57.567065719Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:17:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2256 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:17:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:17:57.567218 env[1145]: time="2023-10-02T19:17:57.567186959Z" level=error msg="copy shim log" error="read /proc/self/fd/32: file already closed" Oct 2 19:17:57.568764 env[1145]: time="2023-10-02T19:17:57.568743507Z" level=error msg="Failed to pipe stderr of container \"a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d\"" error="reading from a closed fifo" Oct 2 19:17:57.568843 env[1145]: time="2023-10-02T19:17:57.568828562Z" level=error msg="Failed to pipe stdout of container \"a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d\"" error="reading from a closed fifo" Oct 2 19:17:57.569329 env[1145]: time="2023-10-02T19:17:57.569311485Z" level=error msg="StartContainer for \"a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:17:57.569795 kubelet[1563]: E1002 19:17:57.569493 1563 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d" Oct 2 19:17:57.569795 kubelet[1563]: E1002 19:17:57.569561 1563 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:17:57.569795 kubelet[1563]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:17:57.569795 kubelet[1563]: rm /hostbin/cilium-mount Oct 2 19:17:57.569795 kubelet[1563]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-f6bdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-vtnf6_kube-system(5094cd08-b6e2-4fd3-86f6-cb0eef415303): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:17:57.569795 kubelet[1563]: E1002 19:17:57.569583 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-vtnf6" podUID=5094cd08-b6e2-4fd3-86f6-cb0eef415303 Oct 2 19:17:58.310698 kubelet[1563]: E1002 19:17:58.310661 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:58.475824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819727735.mount: Deactivated successfully. 
Oct 2 19:17:58.526340 kubelet[1563]: I1002 19:17:58.526318 1563 scope.go:115] "RemoveContainer" containerID="004bf174e532836d10b4038e622ed00a4bb4500ef5bd296de7231b8236d24282" Oct 2 19:17:58.526525 kubelet[1563]: I1002 19:17:58.526513 1563 scope.go:115] "RemoveContainer" containerID="004bf174e532836d10b4038e622ed00a4bb4500ef5bd296de7231b8236d24282" Oct 2 19:17:58.527059 env[1145]: time="2023-10-02T19:17:58.527041056Z" level=info msg="RemoveContainer for \"004bf174e532836d10b4038e622ed00a4bb4500ef5bd296de7231b8236d24282\"" Oct 2 19:17:58.528321 env[1145]: time="2023-10-02T19:17:58.528306220Z" level=info msg="RemoveContainer for \"004bf174e532836d10b4038e622ed00a4bb4500ef5bd296de7231b8236d24282\" returns successfully" Oct 2 19:17:58.528443 env[1145]: time="2023-10-02T19:17:58.528432084Z" level=info msg="RemoveContainer for \"004bf174e532836d10b4038e622ed00a4bb4500ef5bd296de7231b8236d24282\"" Oct 2 19:17:58.528496 env[1145]: time="2023-10-02T19:17:58.528484518Z" level=info msg="RemoveContainer for \"004bf174e532836d10b4038e622ed00a4bb4500ef5bd296de7231b8236d24282\" returns successfully" Oct 2 19:17:58.528721 kubelet[1563]: E1002 19:17:58.528700 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-vtnf6_kube-system(5094cd08-b6e2-4fd3-86f6-cb0eef415303)\"" pod="kube-system/cilium-vtnf6" podUID=5094cd08-b6e2-4fd3-86f6-cb0eef415303 Oct 2 19:17:58.918408 env[1145]: time="2023-10-02T19:17:58.918378157Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:17:58.919024 env[1145]: time="2023-10-02T19:17:58.919012104Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:17:58.919749 env[1145]: time="2023-10-02T19:17:58.919736681Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:17:58.920074 env[1145]: time="2023-10-02T19:17:58.920055706Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 2 19:17:58.921269 env[1145]: time="2023-10-02T19:17:58.921253353Z" level=info msg="CreateContainer within sandbox \"3553060a81529d114395382f72776e36b728be08c8ed594f70e2bdb9e487bccc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:17:58.926731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount860338738.mount: Deactivated successfully. 
Oct 2 19:17:58.938692 env[1145]: time="2023-10-02T19:17:58.938666871Z" level=info msg="CreateContainer within sandbox \"3553060a81529d114395382f72776e36b728be08c8ed594f70e2bdb9e487bccc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9\"" Oct 2 19:17:58.939018 env[1145]: time="2023-10-02T19:17:58.938992196Z" level=info msg="StartContainer for \"5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9\"" Oct 2 19:17:58.948988 systemd[1]: Started cri-containerd-5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9.scope. Oct 2 19:17:58.958000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit: BPF prog-id=80 op=LOAD Oct 2 19:17:58.958000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit[2277]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2128 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:58.958000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535363038353264363161393834383436353061356232323832636531 Oct 2 19:17:58.958000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit[2277]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=2128 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:58.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535363038353264363161393834383436353061356232323832636531 Oct 2 19:17:58.958000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.958000 audit: BPF prog-id=81 op=LOAD Oct 2 19:17:58.958000 audit[2277]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000344120 items=0 ppid=2128 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:58.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535363038353264363161393834383436353061356232323832636531 Oct 2 19:17:58.959000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.959000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.959000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.959000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.959000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.959000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.959000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.959000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.959000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.959000 audit: BPF prog-id=82 op=LOAD Oct 2 19:17:58.959000 audit[2277]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000344168 items=0 ppid=2128 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:58.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535363038353264363161393834383436353061356232323832636531 Oct 2 19:17:58.959000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:17:58.959000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:17:58.959000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.959000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.959000 audit[2277]: 
AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.959000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.959000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.959000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.959000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.959000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.959000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.959000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:58.959000 audit: BPF prog-id=83 op=LOAD Oct 2 19:17:58.959000 audit[2277]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000344578 items=0 ppid=2128 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:58.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535363038353264363161393834383436353061356232323832636531 Oct 2 19:17:58.969906 env[1145]: time="2023-10-02T19:17:58.969882652Z" level=info msg="StartContainer for \"5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9\" returns successfully" Oct 2 19:17:58.980000 audit[2288]: AVC avc: denied { map_create } for pid=2288 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c735,c770 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c735,c770 tclass=bpf permissive=0 Oct 2 19:17:58.980000 audit[2288]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c00053b7d0 a2=48 a3=c00053b7c0 items=0 ppid=2128 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c735,c770 key=(null) Oct 2 19:17:58.980000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:17:59.311691 kubelet[1563]: E1002 19:17:59.311608 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:17:59.534145 kubelet[1563]: I1002 19:17:59.534074 1563 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-bngvn" podStartSLOduration=1.966767095 podCreationTimestamp="2023-10-02 19:17:56 +0000 UTC" firstStartedPulling="2023-10-02 19:17:57.352957773 +0000 UTC m=+190.458584184" lastFinishedPulling="2023-10-02 19:17:58.920229998 +0000 UTC m=+192.025856408" observedRunningTime="2023-10-02 19:17:59.533856683 +0000 UTC m=+192.639483102" watchObservedRunningTime="2023-10-02 19:17:59.534039319 +0000 UTC m=+192.639665738" Oct 2 19:18:00.311785 kubelet[1563]: E1002 19:18:00.311758 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:00.465255 kubelet[1563]: W1002 19:18:00.465179 1563 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5094cd08_b6e2_4fd3_86f6_cb0eef415303.slice/cri-containerd-004bf174e532836d10b4038e622ed00a4bb4500ef5bd296de7231b8236d24282.scope WatchSource:0}: container "004bf174e532836d10b4038e622ed00a4bb4500ef5bd296de7231b8236d24282" in namespace "k8s.io": not found Oct 2 19:18:01.312206 kubelet[1563]: E1002 19:18:01.312173 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:02.261417 kubelet[1563]: E1002 19:18:02.261398 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:02.312929 kubelet[1563]: E1002 19:18:02.312906 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:03.313944 kubelet[1563]: E1002 19:18:03.313915 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:03.571203 kubelet[1563]: W1002 19:18:03.571128 1563 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5094cd08_b6e2_4fd3_86f6_cb0eef415303.slice/cri-containerd-a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d.scope WatchSource:0}: task a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d not found: not found Oct 2 19:18:04.314842 kubelet[1563]: E1002 19:18:04.314801 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:05.315496 kubelet[1563]: E1002 19:18:05.315427 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:06.315766 kubelet[1563]: E1002 19:18:06.315731 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:07.169481 kubelet[1563]: E1002 19:18:07.169452 1563 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:07.262454 kubelet[1563]: E1002 19:18:07.262431 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:07.316379 kubelet[1563]: E1002 19:18:07.316346 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:18:08.316968 kubelet[1563]: E1002 19:18:08.316937 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:09.317396 kubelet[1563]: E1002 19:18:09.317366 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:10.280045 env[1145]: time="2023-10-02T19:18:10.279934448Z" level=info msg="CreateContainer within sandbox \"3e5548f0bb9d7fbd195b2847ffe3760a7d24a243fc156c6ff4c5a8e19ad5de42\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:18:10.286616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2707048800.mount: Deactivated successfully. Oct 2 19:18:10.290369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2334892241.mount: Deactivated successfully. Oct 2 19:18:10.292327 env[1145]: time="2023-10-02T19:18:10.292300931Z" level=info msg="CreateContainer within sandbox \"3e5548f0bb9d7fbd195b2847ffe3760a7d24a243fc156c6ff4c5a8e19ad5de42\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd\"" Oct 2 19:18:10.292631 env[1145]: time="2023-10-02T19:18:10.292617576Z" level=info msg="StartContainer for \"b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd\"" Oct 2 19:18:10.305851 systemd[1]: Started cri-containerd-b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd.scope. Oct 2 19:18:10.313605 systemd[1]: cri-containerd-b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd.scope: Deactivated successfully. Oct 2 19:18:10.313786 systemd[1]: Stopped cri-containerd-b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd.scope. 
Oct 2 19:18:10.318389 kubelet[1563]: E1002 19:18:10.318364 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:10.528861 env[1145]: time="2023-10-02T19:18:10.528814212Z" level=info msg="shim disconnected" id=b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd Oct 2 19:18:10.528861 env[1145]: time="2023-10-02T19:18:10.528856930Z" level=warning msg="cleaning up after shim disconnected" id=b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd namespace=k8s.io Oct 2 19:18:10.528861 env[1145]: time="2023-10-02T19:18:10.528865066Z" level=info msg="cleaning up dead shim" Oct 2 19:18:10.535390 env[1145]: time="2023-10-02T19:18:10.534999829Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2331 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:18:10Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:18:10.535669 env[1145]: time="2023-10-02T19:18:10.535632594Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:18:10.536245 env[1145]: time="2023-10-02T19:18:10.535891134Z" level=error msg="Failed to pipe stdout of container \"b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd\"" error="reading from a closed fifo" Oct 2 19:18:10.536972 env[1145]: time="2023-10-02T19:18:10.536947100Z" level=error msg="Failed to pipe stderr of container \"b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd\"" error="reading from a closed fifo" Oct 2 19:18:10.541034 env[1145]: time="2023-10-02T19:18:10.541005702Z" level=error msg="StartContainer for \"b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:18:10.541195 kubelet[1563]: E1002 19:18:10.541176 1563 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd" Oct 2 19:18:10.541511 kubelet[1563]: E1002 19:18:10.541493 1563 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:18:10.541511 kubelet[1563]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:18:10.541511 kubelet[1563]: rm /hostbin/cilium-mount Oct 2 19:18:10.541511 kubelet[1563]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-f6bdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-vtnf6_kube-system(5094cd08-b6e2-4fd3-86f6-cb0eef415303): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:18:10.541690 kubelet[1563]: E1002 19:18:10.541541 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-vtnf6" podUID=5094cd08-b6e2-4fd3-86f6-cb0eef415303 Oct 2 19:18:10.543424 kubelet[1563]: I1002 19:18:10.543360 1563 scope.go:115] "RemoveContainer" containerID="a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d" Oct 2 19:18:10.543568 kubelet[1563]: I1002 19:18:10.543551 1563 scope.go:115] "RemoveContainer" containerID="a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d" Oct 2 19:18:10.544810 env[1145]: time="2023-10-02T19:18:10.544784473Z" level=info msg="RemoveContainer for \"a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d\"" Oct 2 19:18:10.545036 env[1145]: time="2023-10-02T19:18:10.545013582Z" level=info msg="RemoveContainer for \"a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d\"" Oct 2 19:18:10.545094 env[1145]: time="2023-10-02T19:18:10.545070709Z" level=error msg="RemoveContainer for \"a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d\" failed" error="failed to set removing state for container \"a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d\": container is already in removing state" Oct 2 19:18:10.545242 kubelet[1563]: E1002 19:18:10.545198 1563 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d\": container is already in 
removing state" containerID="a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d" Oct 2 19:18:10.545242 kubelet[1563]: I1002 19:18:10.545230 1563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d} err="rpc error: code = Unknown desc = failed to set removing state for container \"a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d\": container is already in removing state" Oct 2 19:18:10.546975 env[1145]: time="2023-10-02T19:18:10.546950391Z" level=info msg="RemoveContainer for \"a7e7abdc00c1cfc2e6a234364f4a46219e3a5ed1aeabc8315e4b47beb84b000d\" returns successfully" Oct 2 19:18:10.547744 kubelet[1563]: E1002 19:18:10.547473 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-vtnf6_kube-system(5094cd08-b6e2-4fd3-86f6-cb0eef415303)\"" pod="kube-system/cilium-vtnf6" podUID=5094cd08-b6e2-4fd3-86f6-cb0eef415303 Oct 2 19:18:11.285080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd-rootfs.mount: Deactivated successfully. Oct 2 19:18:11.319288 kubelet[1563]: E1002 19:18:11.319256 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:12.263007 kubelet[1563]: E1002 19:18:12.262988 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:12.319863 kubelet[1563]: E1002 19:18:12.319840 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:13.320777 kubelet[1563]: E1002 19:18:13.320748 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:13.633524 kubelet[1563]: W1002 19:18:13.633315 1563 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5094cd08_b6e2_4fd3_86f6_cb0eef415303.slice/cri-containerd-b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd.scope WatchSource:0}: task b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd not found: not found Oct 2 19:18:14.321667 kubelet[1563]: E1002 19:18:14.321635 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:15.322645 kubelet[1563]: E1002 19:18:15.322623 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:16.323603 kubelet[1563]: E1002 19:18:16.323578 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:17.263657 kubelet[1563]: E1002 19:18:17.263642 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:17.324240 kubelet[1563]: E1002 19:18:17.324216 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:18.324587 kubelet[1563]: E1002 19:18:18.324554 1563 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:19.325156 kubelet[1563]: E1002 19:18:19.325134 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:20.326396 kubelet[1563]: E1002 19:18:20.326369 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:21.326862 kubelet[1563]: E1002 19:18:21.326840 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:22.265105 kubelet[1563]: E1002 19:18:22.265089 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:22.327784 kubelet[1563]: E1002 19:18:22.327759 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:23.279216 kubelet[1563]: E1002 19:18:23.279193 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-vtnf6_kube-system(5094cd08-b6e2-4fd3-86f6-cb0eef415303)\"" pod="kube-system/cilium-vtnf6" podUID=5094cd08-b6e2-4fd3-86f6-cb0eef415303 Oct 2 19:18:23.328263 kubelet[1563]: E1002 19:18:23.328208 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:24.328736 kubelet[1563]: E1002 19:18:24.328699 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:25.329532 kubelet[1563]: E1002 19:18:25.329506 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:26.330285 kubelet[1563]: E1002 19:18:26.330253 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:27.169444 kubelet[1563]: E1002 19:18:27.169413 1563 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:27.266304 kubelet[1563]: E1002 19:18:27.266285 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:27.331028 kubelet[1563]: E1002 19:18:27.331005 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:28.331606 kubelet[1563]: E1002 19:18:28.331577 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:29.332638 kubelet[1563]: E1002 19:18:29.332612 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:30.333837 kubelet[1563]: E1002 19:18:30.333802 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:31.334338 kubelet[1563]: E1002 19:18:31.334302 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:32.266726 
kubelet[1563]: E1002 19:18:32.266696 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:32.334580 kubelet[1563]: E1002 19:18:32.334565 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:33.335767 kubelet[1563]: E1002 19:18:33.335696 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:34.336482 kubelet[1563]: E1002 19:18:34.336455 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:35.337408 kubelet[1563]: E1002 19:18:35.337381 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:36.337702 kubelet[1563]: E1002 19:18:36.337675 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:37.267244 kubelet[1563]: E1002 19:18:37.267214 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:37.279191 env[1145]: time="2023-10-02T19:18:37.279099992Z" level=info msg="CreateContainer within sandbox \"3e5548f0bb9d7fbd195b2847ffe3760a7d24a243fc156c6ff4c5a8e19ad5de42\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:18:37.283513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2052968780.mount: Deactivated successfully. Oct 2 19:18:37.286422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2973690068.mount: Deactivated successfully. Oct 2 19:18:37.288144 env[1145]: time="2023-10-02T19:18:37.288113607Z" level=info msg="CreateContainer within sandbox \"3e5548f0bb9d7fbd195b2847ffe3760a7d24a243fc156c6ff4c5a8e19ad5de42\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"65aa08d1cb9f1bcb7b81c5187d554ac8f90d5bb8b608347474b8b66109f22076\"" Oct 2 19:18:37.288691 env[1145]: time="2023-10-02T19:18:37.288672753Z" level=info msg="StartContainer for \"65aa08d1cb9f1bcb7b81c5187d554ac8f90d5bb8b608347474b8b66109f22076\"" Oct 2 19:18:37.307576 systemd[1]: Started cri-containerd-65aa08d1cb9f1bcb7b81c5187d554ac8f90d5bb8b608347474b8b66109f22076.scope. Oct 2 19:18:37.314159 systemd[1]: cri-containerd-65aa08d1cb9f1bcb7b81c5187d554ac8f90d5bb8b608347474b8b66109f22076.scope: Deactivated successfully. Oct 2 19:18:37.314326 systemd[1]: Stopped cri-containerd-65aa08d1cb9f1bcb7b81c5187d554ac8f90d5bb8b608347474b8b66109f22076.scope. 
Oct 2 19:18:37.318508 env[1145]: time="2023-10-02T19:18:37.318475707Z" level=info msg="shim disconnected" id=65aa08d1cb9f1bcb7b81c5187d554ac8f90d5bb8b608347474b8b66109f22076 Oct 2 19:18:37.318508 env[1145]: time="2023-10-02T19:18:37.318508392Z" level=warning msg="cleaning up after shim disconnected" id=65aa08d1cb9f1bcb7b81c5187d554ac8f90d5bb8b608347474b8b66109f22076 namespace=k8s.io Oct 2 19:18:37.318621 env[1145]: time="2023-10-02T19:18:37.318553655Z" level=info msg="cleaning up dead shim" Oct 2 19:18:37.323270 env[1145]: time="2023-10-02T19:18:37.323248128Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2368 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:18:37Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/65aa08d1cb9f1bcb7b81c5187d554ac8f90d5bb8b608347474b8b66109f22076/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:18:37.323482 env[1145]: time="2023-10-02T19:18:37.323452108Z" level=error msg="copy shim log" error="read /proc/self/fd/49: file already closed" Oct 2 19:18:37.324412 env[1145]: time="2023-10-02T19:18:37.324386189Z" level=error msg="Failed to pipe stderr of container \"65aa08d1cb9f1bcb7b81c5187d554ac8f90d5bb8b608347474b8b66109f22076\"" error="reading from a closed fifo" Oct 2 19:18:37.324503 env[1145]: time="2023-10-02T19:18:37.324475947Z" level=error msg="Failed to pipe stdout of container \"65aa08d1cb9f1bcb7b81c5187d554ac8f90d5bb8b608347474b8b66109f22076\"" error="reading from a closed fifo" Oct 2 19:18:37.324978 env[1145]: time="2023-10-02T19:18:37.324955805Z" level=error msg="StartContainer for \"65aa08d1cb9f1bcb7b81c5187d554ac8f90d5bb8b608347474b8b66109f22076\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:18:37.325120 kubelet[1563]: E1002 19:18:37.325105 1563 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="65aa08d1cb9f1bcb7b81c5187d554ac8f90d5bb8b608347474b8b66109f22076" Oct 2 19:18:37.325177 kubelet[1563]: E1002 19:18:37.325172 1563 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:18:37.325177 kubelet[1563]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:18:37.325177 kubelet[1563]: rm /hostbin/cilium-mount Oct 2 19:18:37.325177 kubelet[1563]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-f6bdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-vtnf6_kube-system(5094cd08-b6e2-4fd3-86f6-cb0eef415303): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:18:37.325274 kubelet[1563]: E1002 19:18:37.325197 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-vtnf6" podUID=5094cd08-b6e2-4fd3-86f6-cb0eef415303 Oct 2 19:18:37.338166 kubelet[1563]: E1002 19:18:37.338139 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:37.577934 kubelet[1563]: I1002 19:18:37.577396 1563 scope.go:115] "RemoveContainer" containerID="b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd" Oct 2 19:18:37.577934 kubelet[1563]: I1002 19:18:37.577636 1563 scope.go:115] "RemoveContainer" containerID="b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd" Oct 2 19:18:37.578924 env[1145]: time="2023-10-02T19:18:37.578904950Z" level=info msg="RemoveContainer for \"b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd\"" Oct 2 19:18:37.579729 env[1145]: time="2023-10-02T19:18:37.579566733Z" level=info msg="RemoveContainer for \"b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd\"" Oct 2 19:18:37.579780 env[1145]: time="2023-10-02T19:18:37.579666386Z" level=error msg="RemoveContainer for \"b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd\" failed" error="failed to set removing state for container \"b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd\": container is already in removing state" Oct 2 19:18:37.580114 kubelet[1563]: E1002 19:18:37.579815 1563 remote_runtime.go:368] "RemoveContainer from runtime service 
failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd\": container is already in removing state" containerID="b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd" Oct 2 19:18:37.580114 kubelet[1563]: E1002 19:18:37.579843 1563 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd": container is already in removing state; Skipping pod "cilium-vtnf6_kube-system(5094cd08-b6e2-4fd3-86f6-cb0eef415303)" Oct 2 19:18:37.580114 kubelet[1563]: E1002 19:18:37.580017 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-vtnf6_kube-system(5094cd08-b6e2-4fd3-86f6-cb0eef415303)\"" pod="kube-system/cilium-vtnf6" podUID=5094cd08-b6e2-4fd3-86f6-cb0eef415303 Oct 2 19:18:37.584544 env[1145]: time="2023-10-02T19:18:37.584524338Z" level=info msg="RemoveContainer for \"b335c1f9b00ea167054f9e9b2d9d59ad65b0f0557e1eb904ff30a5e27b6164fd\" returns successfully" Oct 2 19:18:38.282400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65aa08d1cb9f1bcb7b81c5187d554ac8f90d5bb8b608347474b8b66109f22076-rootfs.mount: Deactivated successfully. Oct 2 19:18:38.339004 kubelet[1563]: E1002 19:18:38.338979 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:39.339852 kubelet[1563]: E1002 19:18:39.339818 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:40.340370 kubelet[1563]: E1002 19:18:40.340338 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:40.422683 kubelet[1563]: W1002 19:18:40.422588 1563 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5094cd08_b6e2_4fd3_86f6_cb0eef415303.slice/cri-containerd-65aa08d1cb9f1bcb7b81c5187d554ac8f90d5bb8b608347474b8b66109f22076.scope WatchSource:0}: task 65aa08d1cb9f1bcb7b81c5187d554ac8f90d5bb8b608347474b8b66109f22076 not found: not found Oct 2 19:18:41.340873 kubelet[1563]: E1002 19:18:41.340842 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:42.268269 kubelet[1563]: E1002 19:18:42.268236 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:42.341137 kubelet[1563]: E1002 19:18:42.341107 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:43.341886 kubelet[1563]: E1002 19:18:43.341823 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:44.342864 kubelet[1563]: E1002 19:18:44.342831 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:45.343919 kubelet[1563]: E1002 19:18:45.343894 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:18:46.344785 kubelet[1563]: E1002 19:18:46.344744 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:47.168574 kubelet[1563]: E1002 19:18:47.168535 1563 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:47.185904 env[1145]: time="2023-10-02T19:18:47.185700334Z" level=info msg="StopPodSandbox for \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\"" Oct 2 19:18:47.185904 env[1145]: time="2023-10-02T19:18:47.185803529Z" level=info msg="TearDown network for sandbox \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\" successfully" Oct 2 19:18:47.185904 env[1145]: time="2023-10-02T19:18:47.185859195Z" level=info msg="StopPodSandbox for \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\" returns successfully" Oct 2 19:18:47.187233 env[1145]: time="2023-10-02T19:18:47.186520128Z" level=info msg="RemovePodSandbox for \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\"" Oct 2 19:18:47.187233 env[1145]: time="2023-10-02T19:18:47.186540072Z" level=info msg="Forcibly stopping sandbox \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\"" Oct 2 19:18:47.187233 env[1145]: time="2023-10-02T19:18:47.186586939Z" level=info msg="TearDown network for sandbox \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\" successfully" Oct 2 19:18:47.195308 env[1145]: time="2023-10-02T19:18:47.195226741Z" level=info msg="RemovePodSandbox \"a5e40a0e9b6f47d3b19b2f9e2b9d7a355be1a6e85c3f5bf144a98230832309ad\" returns successfully" Oct 2 19:18:47.269320 kubelet[1563]: E1002 19:18:47.269301 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:47.345665 kubelet[1563]: E1002 19:18:47.345633 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:48.346535 kubelet[1563]: E1002 19:18:48.346489 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:49.347391 kubelet[1563]: E1002 19:18:49.347368 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:50.348016 kubelet[1563]: E1002 19:18:50.347990 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:51.348589 kubelet[1563]: E1002 19:18:51.348551 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:52.270554 kubelet[1563]: E1002 19:18:52.270537 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:52.349210 kubelet[1563]: E1002 19:18:52.349185 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:53.279701 kubelet[1563]: E1002 19:18:53.279674 1563 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup 
pod=cilium-vtnf6_kube-system(5094cd08-b6e2-4fd3-86f6-cb0eef415303)\"" pod="kube-system/cilium-vtnf6" podUID=5094cd08-b6e2-4fd3-86f6-cb0eef415303 Oct 2 19:18:53.349648 kubelet[1563]: E1002 19:18:53.349624 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:54.350297 kubelet[1563]: E1002 19:18:54.350272 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:55.350764 kubelet[1563]: E1002 19:18:55.350737 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:56.351470 kubelet[1563]: E1002 19:18:56.351423 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:57.274305 kubelet[1563]: E1002 19:18:57.274283 1563 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:57.351527 kubelet[1563]: E1002 19:18:57.351497 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:57.956930 env[1145]: time="2023-10-02T19:18:57.956892156Z" level=info msg="StopPodSandbox for \"3e5548f0bb9d7fbd195b2847ffe3760a7d24a243fc156c6ff4c5a8e19ad5de42\"" Oct 2 19:18:57.958140 env[1145]: time="2023-10-02T19:18:57.956956321Z" level=info msg="Container to stop \"65aa08d1cb9f1bcb7b81c5187d554ac8f90d5bb8b608347474b8b66109f22076\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:18:57.958081 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e5548f0bb9d7fbd195b2847ffe3760a7d24a243fc156c6ff4c5a8e19ad5de42-shm.mount: Deactivated successfully. Oct 2 19:18:57.963472 systemd[1]: cri-containerd-3e5548f0bb9d7fbd195b2847ffe3760a7d24a243fc156c6ff4c5a8e19ad5de42.scope: Deactivated successfully. Oct 2 19:18:57.961000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:18:57.965161 kernel: kauditd_printk_skb: 230 callbacks suppressed Oct 2 19:18:57.965233 kernel: audit: type=1334 audit(1696274337.961:688): prog-id=72 op=UNLOAD Oct 2 19:18:57.968000 audit: BPF prog-id=75 op=UNLOAD Oct 2 19:18:57.971816 kernel: audit: type=1334 audit(1696274337.968:689): prog-id=75 op=UNLOAD Oct 2 19:18:57.982399 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e5548f0bb9d7fbd195b2847ffe3760a7d24a243fc156c6ff4c5a8e19ad5de42-rootfs.mount: Deactivated successfully. 
Oct 2 19:18:57.986378 env[1145]: time="2023-10-02T19:18:57.986341117Z" level=info msg="StopContainer for \"5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9\" with timeout 30 (s)" Oct 2 19:18:57.986806 env[1145]: time="2023-10-02T19:18:57.986784653Z" level=info msg="Stop container \"5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9\" with signal terminated" Oct 2 19:18:57.987126 env[1145]: time="2023-10-02T19:18:57.987099108Z" level=info msg="shim disconnected" id=3e5548f0bb9d7fbd195b2847ffe3760a7d24a243fc156c6ff4c5a8e19ad5de42 Oct 2 19:18:57.987170 env[1145]: time="2023-10-02T19:18:57.987127286Z" level=warning msg="cleaning up after shim disconnected" id=3e5548f0bb9d7fbd195b2847ffe3760a7d24a243fc156c6ff4c5a8e19ad5de42 namespace=k8s.io Oct 2 19:18:57.987170 env[1145]: time="2023-10-02T19:18:57.987136909Z" level=info msg="cleaning up dead shim" Oct 2 19:18:57.994774 systemd[1]: cri-containerd-5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9.scope: Deactivated successfully. Oct 2 19:18:57.993000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:18:57.996766 kernel: audit: type=1334 audit(1696274337.993:690): prog-id=80 op=UNLOAD Oct 2 19:18:57.996000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:18:57.999664 env[1145]: time="2023-10-02T19:18:57.999630782Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2404 runtime=io.containerd.runc.v2\n" Oct 2 19:18:57.999817 kernel: audit: type=1334 audit(1696274337.996:691): prog-id=83 op=UNLOAD Oct 2 19:18:57.999967 env[1145]: time="2023-10-02T19:18:57.999944108Z" level=info msg="TearDown network for sandbox \"3e5548f0bb9d7fbd195b2847ffe3760a7d24a243fc156c6ff4c5a8e19ad5de42\" successfully" Oct 2 19:18:58.000002 env[1145]: time="2023-10-02T19:18:57.999963877Z" level=info msg="StopPodSandbox for \"3e5548f0bb9d7fbd195b2847ffe3760a7d24a243fc156c6ff4c5a8e19ad5de42\" returns successfully" Oct 2 19:18:58.011029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9-rootfs.mount: Deactivated successfully. 
Oct 2 19:18:58.032803 kubelet[1563]: I1002 19:18:58.032421 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5094cd08-b6e2-4fd3-86f6-cb0eef415303-cilium-config-path\") pod \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " Oct 2 19:18:58.032803 kubelet[1563]: I1002 19:18:58.032449 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-xtables-lock\") pod \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " Oct 2 19:18:58.032803 kubelet[1563]: I1002 19:18:58.032461 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-host-proc-sys-kernel\") pod \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " Oct 2 19:18:58.032803 kubelet[1563]: I1002 19:18:58.032470 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-host-proc-sys-net\") pod \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " Oct 2 19:18:58.032803 kubelet[1563]: I1002 19:18:58.032484 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-bpf-maps\") pod \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " Oct 2 19:18:58.032803 kubelet[1563]: I1002 19:18:58.032494 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-cilium-cgroup\") pod \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " Oct 2 19:18:58.032803 kubelet[1563]: I1002 19:18:58.032508 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5094cd08-b6e2-4fd3-86f6-cb0eef415303-cilium-ipsec-secrets\") pod \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " Oct 2 19:18:58.032803 kubelet[1563]: I1002 19:18:58.032519 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-hostproc\") pod \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " Oct 2 19:18:58.032803 kubelet[1563]: I1002 19:18:58.032528 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-etc-cni-netd\") pod \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " Oct 2 19:18:58.032803 kubelet[1563]: I1002 19:18:58.032538 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-lib-modules\") pod \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " Oct 2 19:18:58.032803 kubelet[1563]: I1002 19:18:58.032550 
1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6bdq\" (UniqueName: \"kubernetes.io/projected/5094cd08-b6e2-4fd3-86f6-cb0eef415303-kube-api-access-f6bdq\") pod \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " Oct 2 19:18:58.032803 kubelet[1563]: I1002 19:18:58.032560 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-cni-path\") pod \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " Oct 2 19:18:58.032803 kubelet[1563]: I1002 19:18:58.032571 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5094cd08-b6e2-4fd3-86f6-cb0eef415303-clustermesh-secrets\") pod \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " Oct 2 19:18:58.032803 kubelet[1563]: I1002 19:18:58.032582 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5094cd08-b6e2-4fd3-86f6-cb0eef415303-hubble-tls\") pod \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " Oct 2 19:18:58.032803 kubelet[1563]: I1002 19:18:58.032596 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-cilium-run\") pod \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\" (UID: \"5094cd08-b6e2-4fd3-86f6-cb0eef415303\") " Oct 2 19:18:58.032803 kubelet[1563]: I1002 19:18:58.032629 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5094cd08-b6e2-4fd3-86f6-cb0eef415303" (UID: "5094cd08-b6e2-4fd3-86f6-cb0eef415303"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:58.037254 kubelet[1563]: I1002 19:18:58.032700 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-hostproc" (OuterVolumeSpecName: "hostproc") pod "5094cd08-b6e2-4fd3-86f6-cb0eef415303" (UID: "5094cd08-b6e2-4fd3-86f6-cb0eef415303"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:58.037254 kubelet[1563]: I1002 19:18:58.032723 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5094cd08-b6e2-4fd3-86f6-cb0eef415303" (UID: "5094cd08-b6e2-4fd3-86f6-cb0eef415303"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:58.037254 kubelet[1563]: I1002 19:18:58.032738 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5094cd08-b6e2-4fd3-86f6-cb0eef415303" (UID: "5094cd08-b6e2-4fd3-86f6-cb0eef415303"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:58.037254 kubelet[1563]: I1002 19:18:58.032747 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5094cd08-b6e2-4fd3-86f6-cb0eef415303" (UID: "5094cd08-b6e2-4fd3-86f6-cb0eef415303"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:58.037254 kubelet[1563]: I1002 19:18:58.032755 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5094cd08-b6e2-4fd3-86f6-cb0eef415303" (UID: "5094cd08-b6e2-4fd3-86f6-cb0eef415303"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:58.037254 kubelet[1563]: I1002 19:18:58.032763 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5094cd08-b6e2-4fd3-86f6-cb0eef415303" (UID: "5094cd08-b6e2-4fd3-86f6-cb0eef415303"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:58.037254 kubelet[1563]: I1002 19:18:58.032814 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5094cd08-b6e2-4fd3-86f6-cb0eef415303" (UID: "5094cd08-b6e2-4fd3-86f6-cb0eef415303"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:58.037254 kubelet[1563]: I1002 19:18:58.032832 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5094cd08-b6e2-4fd3-86f6-cb0eef415303" (UID: "5094cd08-b6e2-4fd3-86f6-cb0eef415303"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:58.037254 kubelet[1563]: I1002 19:18:58.033126 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-cni-path" (OuterVolumeSpecName: "cni-path") pod "5094cd08-b6e2-4fd3-86f6-cb0eef415303" (UID: "5094cd08-b6e2-4fd3-86f6-cb0eef415303"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:58.037254 kubelet[1563]: W1002 19:18:58.035667 1563 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/5094cd08-b6e2-4fd3-86f6-cb0eef415303/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:18:58.037254 kubelet[1563]: I1002 19:18:58.036609 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5094cd08-b6e2-4fd3-86f6-cb0eef415303-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5094cd08-b6e2-4fd3-86f6-cb0eef415303" (UID: "5094cd08-b6e2-4fd3-86f6-cb0eef415303"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:18:58.037524 env[1145]: time="2023-10-02T19:18:58.035986747Z" level=info msg="shim disconnected" id=5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9 Oct 2 19:18:58.037524 env[1145]: time="2023-10-02T19:18:58.036020720Z" level=warning msg="cleaning up after shim disconnected" id=5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9 namespace=k8s.io Oct 2 19:18:58.037524 env[1145]: time="2023-10-02T19:18:58.036027434Z" level=info msg="cleaning up dead shim" Oct 2 19:18:58.041513 env[1145]: time="2023-10-02T19:18:58.041491679Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2437 runtime=io.containerd.runc.v2\n" Oct 2 19:18:58.044711 systemd[1]: var-lib-kubelet-pods-5094cd08\x2db6e2\x2d4fd3\x2d86f6\x2dcb0eef415303-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:18:58.046093 systemd[1]: var-lib-kubelet-pods-5094cd08\x2db6e2\x2d4fd3\x2d86f6\x2dcb0eef415303-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:18:58.046366 kubelet[1563]: I1002 19:18:58.046351 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5094cd08-b6e2-4fd3-86f6-cb0eef415303-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5094cd08-b6e2-4fd3-86f6-cb0eef415303" (UID: "5094cd08-b6e2-4fd3-86f6-cb0eef415303"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:18:58.046944 kubelet[1563]: I1002 19:18:58.046931 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5094cd08-b6e2-4fd3-86f6-cb0eef415303-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "5094cd08-b6e2-4fd3-86f6-cb0eef415303" (UID: "5094cd08-b6e2-4fd3-86f6-cb0eef415303"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:18:58.047211 kubelet[1563]: I1002 19:18:58.047193 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5094cd08-b6e2-4fd3-86f6-cb0eef415303-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5094cd08-b6e2-4fd3-86f6-cb0eef415303" (UID: "5094cd08-b6e2-4fd3-86f6-cb0eef415303"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:18:58.047492 kubelet[1563]: I1002 19:18:58.047476 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5094cd08-b6e2-4fd3-86f6-cb0eef415303-kube-api-access-f6bdq" (OuterVolumeSpecName: "kube-api-access-f6bdq") pod "5094cd08-b6e2-4fd3-86f6-cb0eef415303" (UID: "5094cd08-b6e2-4fd3-86f6-cb0eef415303"). InnerVolumeSpecName "kube-api-access-f6bdq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:18:58.047614 env[1145]: time="2023-10-02T19:18:58.047592122Z" level=info msg="StopContainer for \"5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9\" returns successfully" Oct 2 19:18:58.047940 env[1145]: time="2023-10-02T19:18:58.047921966Z" level=info msg="StopPodSandbox for \"3553060a81529d114395382f72776e36b728be08c8ed594f70e2bdb9e487bccc\"" Oct 2 19:18:58.047978 env[1145]: time="2023-10-02T19:18:58.047955518Z" level=info msg="Container to stop \"5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:18:58.051688 systemd[1]: cri-containerd-3553060a81529d114395382f72776e36b728be08c8ed594f70e2bdb9e487bccc.scope: Deactivated successfully. Oct 2 19:18:58.050000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:18:58.053735 kernel: audit: type=1334 audit(1696274338.050:692): prog-id=76 op=UNLOAD Oct 2 19:18:58.054000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:18:58.056771 kernel: audit: type=1334 audit(1696274338.054:693): prog-id=79 op=UNLOAD Oct 2 19:18:58.072010 env[1145]: time="2023-10-02T19:18:58.071976820Z" level=info msg="shim disconnected" id=3553060a81529d114395382f72776e36b728be08c8ed594f70e2bdb9e487bccc Oct 2 19:18:58.072527 env[1145]: time="2023-10-02T19:18:58.072515396Z" level=warning msg="cleaning up after shim disconnected" id=3553060a81529d114395382f72776e36b728be08c8ed594f70e2bdb9e487bccc namespace=k8s.io Oct 2 19:18:58.072589 env[1145]: time="2023-10-02T19:18:58.072573303Z" level=info msg="cleaning up dead shim" Oct 2 19:18:58.077041 env[1145]: time="2023-10-02T19:18:58.077020317Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2470 runtime=io.containerd.runc.v2\n" Oct 2 19:18:58.077189 env[1145]: time="2023-10-02T19:18:58.077173763Z" level=info msg="TearDown network for sandbox \"3553060a81529d114395382f72776e36b728be08c8ed594f70e2bdb9e487bccc\" successfully" Oct 2 19:18:58.077223 env[1145]: time="2023-10-02T19:18:58.077187860Z" level=info msg="StopPodSandbox for \"3553060a81529d114395382f72776e36b728be08c8ed594f70e2bdb9e487bccc\" returns successfully" Oct 2 19:18:58.133056 kubelet[1563]: I1002 19:18:58.133010 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/270089f2-e201-4dd4-ab8e-9f46ee7be306-cilium-config-path\") pod \"270089f2-e201-4dd4-ab8e-9f46ee7be306\" (UID: \"270089f2-e201-4dd4-ab8e-9f46ee7be306\") " Oct 2 19:18:58.133056 kubelet[1563]: I1002 19:18:58.133049 1563 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s56lk\" (UniqueName: \"kubernetes.io/projected/270089f2-e201-4dd4-ab8e-9f46ee7be306-kube-api-access-s56lk\") pod \"270089f2-e201-4dd4-ab8e-9f46ee7be306\" (UID: \"270089f2-e201-4dd4-ab8e-9f46ee7be306\") " Oct 2 19:18:58.133212 kubelet[1563]: I1002 19:18:58.133072 1563 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-f6bdq\" (UniqueName: \"kubernetes.io/projected/5094cd08-b6e2-4fd3-86f6-cb0eef415303-kube-api-access-f6bdq\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:18:58.133212 kubelet[1563]: I1002 19:18:58.133081 1563 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-cni-path\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:18:58.133212 
kubelet[1563]: I1002 19:18:58.133088 1563 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-lib-modules\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:18:58.133212 kubelet[1563]: I1002 19:18:58.133093 1563 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-cilium-run\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:18:58.133212 kubelet[1563]: I1002 19:18:58.133099 1563 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5094cd08-b6e2-4fd3-86f6-cb0eef415303-clustermesh-secrets\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:18:58.133212 kubelet[1563]: I1002 19:18:58.133104 1563 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5094cd08-b6e2-4fd3-86f6-cb0eef415303-hubble-tls\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:18:58.133212 kubelet[1563]: I1002 19:18:58.133109 1563 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-bpf-maps\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:18:58.133212 kubelet[1563]: I1002 19:18:58.133114 1563 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-cilium-cgroup\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:18:58.133212 kubelet[1563]: I1002 19:18:58.133120 1563 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5094cd08-b6e2-4fd3-86f6-cb0eef415303-cilium-ipsec-secrets\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:18:58.133212 kubelet[1563]: I1002 19:18:58.133125 1563 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5094cd08-b6e2-4fd3-86f6-cb0eef415303-cilium-config-path\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:18:58.133212 kubelet[1563]: I1002 19:18:58.133131 1563 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-xtables-lock\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:18:58.133212 kubelet[1563]: I1002 19:18:58.133136 1563 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-host-proc-sys-kernel\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:18:58.133212 kubelet[1563]: I1002 19:18:58.133143 1563 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-host-proc-sys-net\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:18:58.133212 kubelet[1563]: I1002 19:18:58.133148 1563 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-hostproc\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:18:58.133212 kubelet[1563]: I1002 19:18:58.133153 1563 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5094cd08-b6e2-4fd3-86f6-cb0eef415303-etc-cni-netd\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:18:58.133558 
kubelet[1563]: W1002 19:18:58.133394 1563 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/270089f2-e201-4dd4-ab8e-9f46ee7be306/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:18:58.134406 kubelet[1563]: I1002 19:18:58.134353 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/270089f2-e201-4dd4-ab8e-9f46ee7be306-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "270089f2-e201-4dd4-ab8e-9f46ee7be306" (UID: "270089f2-e201-4dd4-ab8e-9f46ee7be306"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:18:58.135324 kubelet[1563]: I1002 19:18:58.135313 1563 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/270089f2-e201-4dd4-ab8e-9f46ee7be306-kube-api-access-s56lk" (OuterVolumeSpecName: "kube-api-access-s56lk") pod "270089f2-e201-4dd4-ab8e-9f46ee7be306" (UID: "270089f2-e201-4dd4-ab8e-9f46ee7be306"). InnerVolumeSpecName "kube-api-access-s56lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:18:58.234544 kubelet[1563]: I1002 19:18:58.233373 1563 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-s56lk\" (UniqueName: \"kubernetes.io/projected/270089f2-e201-4dd4-ab8e-9f46ee7be306-kube-api-access-s56lk\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:18:58.234669 kubelet[1563]: I1002 19:18:58.234653 1563 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/270089f2-e201-4dd4-ab8e-9f46ee7be306-cilium-config-path\") on node \"10.67.124.139\" DevicePath \"\"" Oct 2 19:18:58.352573 kubelet[1563]: E1002 19:18:58.352520 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:58.604264 kubelet[1563]: I1002 19:18:58.604089 1563 scope.go:115] "RemoveContainer" containerID="65aa08d1cb9f1bcb7b81c5187d554ac8f90d5bb8b608347474b8b66109f22076" Oct 2 19:18:58.607251 systemd[1]: Removed slice kubepods-burstable-pod5094cd08_b6e2_4fd3_86f6_cb0eef415303.slice. Oct 2 19:18:58.608460 env[1145]: time="2023-10-02T19:18:58.608441929Z" level=info msg="RemoveContainer for \"65aa08d1cb9f1bcb7b81c5187d554ac8f90d5bb8b608347474b8b66109f22076\"" Oct 2 19:18:58.610701 systemd[1]: Removed slice kubepods-besteffort-pod270089f2_e201_4dd4_ab8e_9f46ee7be306.slice. 
Oct 2 19:18:58.618404 env[1145]: time="2023-10-02T19:18:58.618378521Z" level=info msg="RemoveContainer for \"65aa08d1cb9f1bcb7b81c5187d554ac8f90d5bb8b608347474b8b66109f22076\" returns successfully" Oct 2 19:18:58.618654 kubelet[1563]: I1002 19:18:58.618636 1563 scope.go:115] "RemoveContainer" containerID="5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9" Oct 2 19:18:58.619274 env[1145]: time="2023-10-02T19:18:58.619261265Z" level=info msg="RemoveContainer for \"5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9\"" Oct 2 19:18:58.631556 env[1145]: time="2023-10-02T19:18:58.631524397Z" level=info msg="RemoveContainer for \"5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9\" returns successfully" Oct 2 19:18:58.631934 kubelet[1563]: I1002 19:18:58.631861 1563 scope.go:115] "RemoveContainer" containerID="5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9" Oct 2 19:18:58.632171 env[1145]: time="2023-10-02T19:18:58.632120460Z" level=error msg="ContainerStatus for \"5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9\": not found" Oct 2 19:18:58.632346 kubelet[1563]: E1002 19:18:58.632328 1563 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9\": not found" containerID="5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9" Oct 2 19:18:58.632403 kubelet[1563]: I1002 19:18:58.632379 1563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9} err="failed to get container status \"5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"5560852d61a98484650a5b2282ce1379a46cc0c2304e9a57c20cbff92fd748b9\": not found" Oct 2 19:18:58.958028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3553060a81529d114395382f72776e36b728be08c8ed594f70e2bdb9e487bccc-rootfs.mount: Deactivated successfully. Oct 2 19:18:58.958107 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3553060a81529d114395382f72776e36b728be08c8ed594f70e2bdb9e487bccc-shm.mount: Deactivated successfully. Oct 2 19:18:58.958154 systemd[1]: var-lib-kubelet-pods-270089f2\x2de201\x2d4dd4\x2dab8e\x2d9f46ee7be306-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds56lk.mount: Deactivated successfully. Oct 2 19:18:58.958209 systemd[1]: var-lib-kubelet-pods-5094cd08\x2db6e2\x2d4fd3\x2d86f6\x2dcb0eef415303-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df6bdq.mount: Deactivated successfully. Oct 2 19:18:58.958253 systemd[1]: var-lib-kubelet-pods-5094cd08\x2db6e2\x2d4fd3\x2d86f6\x2dcb0eef415303-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Oct 2 19:18:59.279790 kubelet[1563]: I1002 19:18:59.279601 1563 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=270089f2-e201-4dd4-ab8e-9f46ee7be306 path="/var/lib/kubelet/pods/270089f2-e201-4dd4-ab8e-9f46ee7be306/volumes" Oct 2 19:18:59.280254 kubelet[1563]: I1002 19:18:59.280174 1563 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=5094cd08-b6e2-4fd3-86f6-cb0eef415303 path="/var/lib/kubelet/pods/5094cd08-b6e2-4fd3-86f6-cb0eef415303/volumes" Oct 2 19:18:59.353670 kubelet[1563]: E1002 19:18:59.353641 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"