Oct 2 19:37:37.673766 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 19:37:37.673789 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:37:37.673799 kernel: Disabled fast string operations Oct 2 19:37:37.673806 kernel: BIOS-provided physical RAM map: Oct 2 19:37:37.673813 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Oct 2 19:37:37.673820 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Oct 2 19:37:37.673831 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Oct 2 19:37:37.673838 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Oct 2 19:37:37.673845 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Oct 2 19:37:37.673853 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Oct 2 19:37:37.673860 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Oct 2 19:37:37.673866 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Oct 2 19:37:37.673873 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Oct 2 19:37:37.673880 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Oct 2 19:37:37.673890 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Oct 2 19:37:37.673898 kernel: NX (Execute Disable) protection: active Oct 2 19:37:37.673906 kernel: SMBIOS 2.7 present. Oct 2 19:37:37.673914 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Oct 2 19:37:37.673921 kernel: vmware: hypercall mode: 0x00 Oct 2 19:37:37.673929 kernel: Hypervisor detected: VMware Oct 2 19:37:37.673938 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Oct 2 19:37:37.673945 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Oct 2 19:37:37.673953 kernel: vmware: using clock offset of 3382772371 ns Oct 2 19:37:37.673961 kernel: tsc: Detected 3408.000 MHz processor Oct 2 19:37:37.673969 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 19:37:37.673978 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 19:37:37.673986 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Oct 2 19:37:37.673993 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 19:37:37.674001 kernel: total RAM covered: 3072M Oct 2 19:37:37.674011 kernel: Found optimal setting for mtrr clean up Oct 2 19:37:37.674020 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Oct 2 19:37:37.674027 kernel: Using GB pages for direct mapping Oct 2 19:37:37.674035 kernel: ACPI: Early table checksum verification disabled Oct 2 19:37:37.674043 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Oct 2 19:37:37.674051 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Oct 2 19:37:37.674059 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Oct 2 19:37:37.674067 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Oct 2 19:37:37.674075 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Oct 2 19:37:37.674083 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Oct 2 19:37:37.674093 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Oct 2 19:37:37.674104 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Oct 2 19:37:37.674113 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Oct 2 19:37:37.674121 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Oct 2 19:37:37.674130 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Oct 2 19:37:37.674140 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Oct 2 19:37:37.674148 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Oct 2 19:37:37.674157 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Oct 2 19:37:37.674165 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Oct 2 19:37:37.674174 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Oct 2 19:37:37.674182 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Oct 2 19:37:37.674191 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Oct 2 19:37:37.674199 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Oct 2 19:37:37.674217 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Oct 2 19:37:37.674228 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Oct 2 19:37:37.674237 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Oct 2 19:37:37.674245 kernel: system APIC only can use physical flat Oct 2 19:37:37.674254 kernel: Setting APIC routing to physical flat. 
Oct 2 19:37:37.674262 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Oct 2 19:37:37.674271 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Oct 2 19:37:37.674279 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Oct 2 19:37:37.674288 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Oct 2 19:37:37.674296 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Oct 2 19:37:37.674306 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Oct 2 19:37:37.674314 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Oct 2 19:37:37.674323 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Oct 2 19:37:37.674331 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Oct 2 19:37:37.674340 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Oct 2 19:37:37.674348 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Oct 2 19:37:37.674356 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Oct 2 19:37:37.674364 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Oct 2 19:37:37.674372 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Oct 2 19:37:37.674380 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Oct 2 19:37:37.674390 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Oct 2 19:37:37.674398 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Oct 2 19:37:37.674407 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Oct 2 19:37:37.674415 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Oct 2 19:37:37.674423 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Oct 2 19:37:37.674431 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Oct 2 19:37:37.674438 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Oct 2 19:37:37.674446 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Oct 2 19:37:37.674454 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Oct 2 19:37:37.674461 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Oct 2 19:37:37.674472 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Oct 2 19:37:37.674481 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Oct 2 19:37:37.674489 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Oct 2 19:37:37.674498 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Oct 2 19:37:37.674506 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Oct 2 19:37:37.674514 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Oct 2 19:37:37.674521 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Oct 2 19:37:37.674529 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Oct 2 19:37:37.674535 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Oct 2 19:37:37.674542 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Oct 2 19:37:37.674551 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Oct 2 19:37:37.674557 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Oct 2 19:37:37.674565 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Oct 2 19:37:37.674572 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Oct 2 19:37:37.674579 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Oct 2 19:37:37.674587 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Oct 2 19:37:37.674594 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Oct 2 19:37:37.674601 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Oct 2 19:37:37.674610 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Oct 2 19:37:37.674618 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Oct 2 19:37:37.674629 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Oct 2 19:37:37.674637 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Oct 2 19:37:37.674652 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Oct 2 19:37:37.674661 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Oct 2 19:37:37.674670 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Oct 2 19:37:37.674678 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Oct 2 19:37:37.674685 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Oct 2 19:37:37.674694 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Oct 2 19:37:37.674702 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Oct 2 19:37:37.674710 kernel: SRAT: PXM 0 -> 
APIC 0x6c -> Node 0 Oct 2 19:37:37.674720 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Oct 2 19:37:37.674729 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Oct 2 19:37:37.674737 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Oct 2 19:37:37.674746 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Oct 2 19:37:37.674754 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Oct 2 19:37:37.674763 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Oct 2 19:37:37.674779 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Oct 2 19:37:37.674788 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Oct 2 19:37:37.674797 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Oct 2 19:37:37.674805 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Oct 2 19:37:37.674815 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Oct 2 19:37:37.674826 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Oct 2 19:37:37.674835 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Oct 2 19:37:37.674843 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Oct 2 19:37:37.674852 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Oct 2 19:37:37.674861 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Oct 2 19:37:37.674870 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Oct 2 19:37:37.674879 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Oct 2 19:37:37.674889 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Oct 2 19:37:37.674898 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Oct 2 19:37:37.674906 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Oct 2 19:37:37.674915 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Oct 2 19:37:37.674923 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Oct 2 19:37:37.674932 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Oct 2 19:37:37.674941 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Oct 2 19:37:37.674950 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Oct 2 19:37:37.674958 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Oct 2 19:37:37.674966 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Oct 2 19:37:37.674976 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Oct 2 19:37:37.674983 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Oct 2 19:37:37.674992 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Oct 2 19:37:37.675000 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Oct 2 19:37:37.675008 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Oct 2 19:37:37.675017 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Oct 2 19:37:37.675026 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Oct 2 19:37:37.675035 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Oct 2 19:37:37.675043 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Oct 2 19:37:37.675053 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Oct 2 19:37:37.675062 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Oct 2 19:37:37.675070 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Oct 2 19:37:37.675079 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Oct 2 19:37:37.675089 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Oct 2 19:37:37.675097 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Oct 2 19:37:37.675105 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Oct 2 19:37:37.675114 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Oct 2 19:37:37.675123 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Oct 2 19:37:37.675131 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Oct 2 19:37:37.675142 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Oct 2 19:37:37.675150 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Oct 2 19:37:37.675159 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Oct 2 19:37:37.675168 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Oct 2 19:37:37.675177 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Oct 2 19:37:37.675185 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Oct 2 19:37:37.675194 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Oct 2 19:37:37.675203 
kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Oct 2 19:37:37.675219 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Oct 2 19:37:37.675228 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Oct 2 19:37:37.675239 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Oct 2 19:37:37.675247 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Oct 2 19:37:37.675256 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Oct 2 19:37:37.675266 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Oct 2 19:37:37.675275 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Oct 2 19:37:37.675284 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Oct 2 19:37:37.675293 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Oct 2 19:37:37.675301 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Oct 2 19:37:37.675310 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Oct 2 19:37:37.675319 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Oct 2 19:37:37.675329 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Oct 2 19:37:37.675338 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Oct 2 19:37:37.675347 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Oct 2 19:37:37.675355 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Oct 2 19:37:37.675364 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Oct 2 19:37:37.675373 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Oct 2 19:37:37.675382 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Oct 2 19:37:37.675391 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Oct 2 19:37:37.675401 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Oct 2 19:37:37.675410 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Oct 2 19:37:37.675421 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Oct 2 19:37:37.675431 kernel: Zone ranges: Oct 2 19:37:37.675440 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 19:37:37.675450 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Oct 2 19:37:37.675459 kernel: Normal empty Oct 2 19:37:37.675468 kernel: Movable zone start for each node Oct 2 19:37:37.675478 kernel: Early memory node ranges Oct 2 19:37:37.675487 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Oct 2 19:37:37.675495 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Oct 2 19:37:37.675507 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Oct 2 19:37:37.675517 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Oct 2 19:37:37.675526 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 19:37:37.675534 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Oct 2 19:37:37.675543 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Oct 2 19:37:37.675552 kernel: ACPI: PM-Timer IO Port: 0x1008 Oct 2 19:37:37.675561 kernel: system APIC only can use physical flat Oct 2 19:37:37.675570 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Oct 2 19:37:37.675580 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Oct 2 19:37:37.675591 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Oct 2 19:37:37.675600 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Oct 2 19:37:37.678045 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Oct 2 19:37:37.678058 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Oct 2 19:37:37.678068 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Oct 2 19:37:37.678077 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Oct 2 19:37:37.678087 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Oct 2 19:37:37.678096 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x09] high edge lint[0x1]) Oct 2 19:37:37.678105 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Oct 2 19:37:37.678114 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Oct 2 19:37:37.678127 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Oct 2 19:37:37.678137 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Oct 2 19:37:37.678146 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Oct 2 19:37:37.678155 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Oct 2 19:37:37.678164 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Oct 2 19:37:37.678173 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Oct 2 19:37:37.678182 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Oct 2 19:37:37.678191 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Oct 2 19:37:37.678200 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Oct 2 19:37:37.678230 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Oct 2 19:37:37.678239 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Oct 2 19:37:37.678248 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Oct 2 19:37:37.678257 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Oct 2 19:37:37.678266 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Oct 2 19:37:37.678275 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Oct 2 19:37:37.678284 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Oct 2 19:37:37.678293 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Oct 2 19:37:37.678302 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Oct 2 19:37:37.678311 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Oct 2 19:37:37.678322 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Oct 2 19:37:37.678332 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Oct 2 19:37:37.678340 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Oct 2 19:37:37.678350 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Oct 2 19:37:37.678359 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Oct 2 19:37:37.678368 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Oct 2 19:37:37.678377 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Oct 2 19:37:37.678386 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Oct 2 19:37:37.678396 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Oct 2 19:37:37.678406 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Oct 2 19:37:37.678416 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Oct 2 19:37:37.678425 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Oct 2 19:37:37.678434 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Oct 2 19:37:37.678443 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Oct 2 19:37:37.678452 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Oct 2 19:37:37.678460 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Oct 2 19:37:37.678470 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Oct 2 19:37:37.678479 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Oct 2 19:37:37.678488 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Oct 2 19:37:37.678499 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Oct 2 19:37:37.678509 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Oct 2 19:37:37.678518 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge 
lint[0x1]) Oct 2 19:37:37.678526 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Oct 2 19:37:37.678535 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Oct 2 19:37:37.678544 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Oct 2 19:37:37.678552 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Oct 2 19:37:37.678561 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Oct 2 19:37:37.678571 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Oct 2 19:37:37.678582 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Oct 2 19:37:37.678591 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Oct 2 19:37:37.678600 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Oct 2 19:37:37.678946 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Oct 2 19:37:37.678961 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Oct 2 19:37:37.678970 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Oct 2 19:37:37.678979 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Oct 2 19:37:37.678989 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Oct 2 19:37:37.678998 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Oct 2 19:37:37.679007 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Oct 2 19:37:37.679018 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Oct 2 19:37:37.679027 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Oct 2 19:37:37.679036 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Oct 2 19:37:37.679045 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Oct 2 19:37:37.679053 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Oct 2 19:37:37.679062 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Oct 2 19:37:37.679071 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Oct 2 19:37:37.679080 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Oct 2 19:37:37.679089 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Oct 2 19:37:37.679101 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Oct 2 19:37:37.679109 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Oct 2 19:37:37.679118 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Oct 2 19:37:37.679127 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Oct 2 19:37:37.679136 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Oct 2 19:37:37.679145 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Oct 2 19:37:37.679154 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Oct 2 19:37:37.679163 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Oct 2 19:37:37.679172 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Oct 2 19:37:37.679183 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Oct 2 19:37:37.679192 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Oct 2 19:37:37.679201 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Oct 2 19:37:37.679217 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Oct 2 19:37:37.679227 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Oct 2 19:37:37.679236 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Oct 2 19:37:37.679245 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Oct 2 19:37:37.679255 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Oct 2 19:37:37.679264 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Oct 2 
19:37:37.679273 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Oct 2 19:37:37.679283 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Oct 2 19:37:37.679292 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Oct 2 19:37:37.679300 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Oct 2 19:37:37.679308 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Oct 2 19:37:37.679317 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Oct 2 19:37:37.679327 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Oct 2 19:37:37.679336 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Oct 2 19:37:37.679345 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Oct 2 19:37:37.679354 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Oct 2 19:37:37.679365 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Oct 2 19:37:37.679374 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Oct 2 19:37:37.679383 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Oct 2 19:37:37.679392 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Oct 2 19:37:37.679401 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Oct 2 19:37:37.679410 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Oct 2 19:37:37.679420 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Oct 2 19:37:37.679429 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Oct 2 19:37:37.679438 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Oct 2 19:37:37.679447 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Oct 2 19:37:37.679457 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Oct 2 19:37:37.679467 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Oct 2 19:37:37.679476 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Oct 2 19:37:37.679484 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Oct 2 19:37:37.679493 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Oct 2 19:37:37.679502 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Oct 2 19:37:37.679511 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Oct 2 19:37:37.679520 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Oct 2 19:37:37.679528 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Oct 2 19:37:37.679539 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Oct 2 19:37:37.679547 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Oct 2 19:37:37.679556 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Oct 2 19:37:37.679565 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Oct 2 19:37:37.679574 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Oct 2 19:37:37.679583 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 19:37:37.679592 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Oct 2 19:37:37.679602 kernel: TSC deadline timer available Oct 2 19:37:37.679611 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Oct 2 19:37:37.679622 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Oct 2 19:37:37.679632 kernel: Booting paravirtualized kernel on VMware hypervisor Oct 2 19:37:37.679641 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 19:37:37.679662 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1 Oct 2 19:37:37.679672 kernel: percpu: Embedded 55 
pages/cpu s185624 r8192 d31464 u262144 Oct 2 19:37:37.679681 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152 Oct 2 19:37:37.679690 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Oct 2 19:37:37.679700 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Oct 2 19:37:37.679709 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Oct 2 19:37:37.679719 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Oct 2 19:37:37.679728 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Oct 2 19:37:37.679737 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Oct 2 19:37:37.679746 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Oct 2 19:37:37.679765 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Oct 2 19:37:37.679776 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Oct 2 19:37:37.679785 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Oct 2 19:37:37.679795 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Oct 2 19:37:37.679804 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Oct 2 19:37:37.679816 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Oct 2 19:37:37.679825 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Oct 2 19:37:37.679834 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Oct 2 19:37:37.679844 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Oct 2 19:37:37.679855 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Oct 2 19:37:37.679865 kernel: Policy zone: DMA32 Oct 2 19:37:37.679877 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:37:37.679887 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:37:37.679898 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Oct 2 19:37:37.679907 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Oct 2 19:37:37.679917 kernel: printk: log_buf_len min size: 262144 bytes Oct 2 19:37:37.679926 kernel: printk: log_buf_len: 1048576 bytes Oct 2 19:37:37.679936 kernel: printk: early log buf free: 239728(91%) Oct 2 19:37:37.679946 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:37:37.679956 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 2 19:37:37.679966 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:37:37.679976 kernel: Memory: 1942952K/2096628K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 153416K reserved, 0K cma-reserved) Oct 2 19:37:37.679987 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Oct 2 19:37:37.679997 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 19:37:37.680007 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 19:37:37.680018 kernel: rcu: Hierarchical RCU implementation. Oct 2 19:37:37.680029 kernel: rcu: RCU event tracing is enabled. Oct 2 19:37:37.680040 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Oct 2 19:37:37.680050 kernel: Rude variant of Tasks RCU enabled. Oct 2 19:37:37.680060 kernel: Tracing variant of Tasks RCU enabled. 
Oct 2 19:37:37.680070 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 2 19:37:37.680080 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Oct 2 19:37:37.680089 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Oct 2 19:37:37.680099 kernel: random: crng init done Oct 2 19:37:37.680108 kernel: Console: colour VGA+ 80x25 Oct 2 19:37:37.680117 kernel: printk: console [tty0] enabled Oct 2 19:37:37.680127 kernel: printk: console [ttyS0] enabled Oct 2 19:37:37.680139 kernel: ACPI: Core revision 20210730 Oct 2 19:37:37.680149 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Oct 2 19:37:37.680159 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 19:37:37.680169 kernel: x2apic enabled Oct 2 19:37:37.680179 kernel: Switched APIC routing to physical x2apic. Oct 2 19:37:37.680188 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 2 19:37:37.680198 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 2 19:37:37.680216 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000) Oct 2 19:37:37.680228 kernel: Disabled fast string operations Oct 2 19:37:37.680239 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Oct 2 19:37:37.680248 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Oct 2 19:37:37.680258 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 19:37:37.680268 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Oct 2 19:37:37.680278 kernel: Spectre V2 : Mitigation: Enhanced IBRS Oct 2 19:37:37.680288 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 19:37:37.680298 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Oct 2 19:37:37.680308 kernel: RETBleed: Mitigation: Enhanced IBRS Oct 2 19:37:37.680319 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 2 19:37:37.680329 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Oct 2 19:37:37.680339 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 2 19:37:37.680349 kernel: SRBDS: Unknown: Dependent on hypervisor status Oct 2 19:37:37.680358 kernel: GDS: Unknown: Dependent on hypervisor status Oct 2 19:37:37.680368 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 2 19:37:37.680378 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 2 19:37:37.680388 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 2 19:37:37.680397 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 2 19:37:37.680409 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 2 19:37:37.680419 kernel: Freeing SMP alternatives memory: 32K Oct 2 19:37:37.680429 kernel: pid_max: default: 131072 minimum: 1024 Oct 2 19:37:37.680438 kernel: LSM: Security Framework initializing Oct 2 19:37:37.680448 kernel: SELinux: Initializing. 
Oct 2 19:37:37.680458 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 2 19:37:37.680468 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 2 19:37:37.680478 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Oct 2 19:37:37.680487 kernel: Performance Events: Skylake events, core PMU driver. Oct 2 19:37:37.680499 kernel: core: CPUID marked event: 'cpu cycles' unavailable Oct 2 19:37:37.680509 kernel: core: CPUID marked event: 'instructions' unavailable Oct 2 19:37:37.680519 kernel: core: CPUID marked event: 'bus cycles' unavailable Oct 2 19:37:37.680528 kernel: core: CPUID marked event: 'cache references' unavailable Oct 2 19:37:37.680539 kernel: core: CPUID marked event: 'cache misses' unavailable Oct 2 19:37:37.680548 kernel: core: CPUID marked event: 'branch instructions' unavailable Oct 2 19:37:37.680558 kernel: core: CPUID marked event: 'branch misses' unavailable Oct 2 19:37:37.680568 kernel: ... version: 1 Oct 2 19:37:37.680578 kernel: ... bit width: 48 Oct 2 19:37:37.680589 kernel: ... generic registers: 4 Oct 2 19:37:37.680598 kernel: ... value mask: 0000ffffffffffff Oct 2 19:37:37.680608 kernel: ... max period: 000000007fffffff Oct 2 19:37:37.680617 kernel: ... fixed-purpose events: 0 Oct 2 19:37:37.680627 kernel: ... event mask: 000000000000000f Oct 2 19:37:37.680636 kernel: signal: max sigframe size: 1776 Oct 2 19:37:37.680646 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:37:37.680655 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 2 19:37:37.680665 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:37:37.680677 kernel: x86: Booting SMP configuration: Oct 2 19:37:37.680687 kernel: .... node #0, CPUs: #1 Oct 2 19:37:37.680695 kernel: Disabled fast string operations Oct 2 19:37:37.680705 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Oct 2 19:37:37.680715 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Oct 2 19:37:37.680723 kernel: smp: Brought up 1 node, 2 CPUs Oct 2 19:37:37.680733 kernel: smpboot: Max logical packages: 128 Oct 2 19:37:37.680743 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Oct 2 19:37:37.680753 kernel: devtmpfs: initialized Oct 2 19:37:37.680763 kernel: x86/mm: Memory block size: 128MB Oct 2 19:37:37.680774 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Oct 2 19:37:37.680783 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:37:37.680793 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Oct 2 19:37:37.680803 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:37:37.680812 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:37:37.680822 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:37:37.680832 kernel: audit: type=2000 audit(1696275456.057:1): state=initialized audit_enabled=0 res=1 Oct 2 19:37:37.680842 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:37:37.680851 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 19:37:37.680863 kernel: cpuidle: using governor menu Oct 2 19:37:37.680872 kernel: Simple Boot Flag at 0x36 set to 0x80 Oct 2 19:37:37.680881 kernel: ACPI: bus type PCI registered Oct 2 19:37:37.680891 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:37:37.680901 kernel: dca service started, version 1.12.1 Oct 2 
19:37:37.680911 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Oct 2 19:37:37.680921 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820 Oct 2 19:37:37.680930 kernel: PCI: Using configuration type 1 for base access Oct 2 19:37:37.680941 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 2 19:37:37.680952 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:37:37.680962 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:37:37.680972 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:37:37.680982 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:37:37.680991 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:37:37.681001 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:37:37.681011 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:37:37.681021 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:37:37.681030 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:37:37.681042 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:37:37.681052 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Oct 2 19:37:37.681062 kernel: ACPI: Interpreter enabled Oct 2 19:37:37.686250 kernel: ACPI: PM: (supports S0 S1 S5) Oct 2 19:37:37.686271 kernel: ACPI: Using IOAPIC for interrupt routing Oct 2 19:37:37.686281 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 2 19:37:37.686291 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Oct 2 19:37:37.686301 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Oct 2 19:37:37.686438 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:37:37.686520 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Oct 2 19:37:37.686591 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Oct 2 19:37:37.686605 kernel: PCI host bridge to bus 0000:00 Oct 2 19:37:37.686677 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 2 19:37:37.686817 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000cffff window] Oct 2 19:37:37.686888 kernel: pci_bus 0000:00: root bus resource [mem 0x000d0000-0x000d3fff window] Oct 2 19:37:37.686957 kernel: pci_bus 0000:00: root bus resource [mem 0x000d4000-0x000d7fff window] Oct 2 19:37:37.687022 kernel: pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff window] Oct 2 19:37:37.687133 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 2 19:37:37.687202 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 2 19:37:37.687270 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Oct 2 19:37:37.687311 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Oct 2 19:37:37.687365 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Oct 2 19:37:37.687420 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Oct 2 19:37:37.687469 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Oct 2 19:37:37.687522 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Oct 2 19:37:37.687570 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Oct 2 19:37:37.687614 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 2 19:37:37.687665 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 2 19:37:37.687713 kernel: pci 
0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 2 19:37:37.687757 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 2 19:37:37.687805 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Oct 2 19:37:37.687849 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Oct 2 19:37:37.687894 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Oct 2 19:37:37.687942 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Oct 2 19:37:37.687987 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Oct 2 19:37:37.688034 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Oct 2 19:37:37.688084 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Oct 2 19:37:37.688129 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Oct 2 19:37:37.688177 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Oct 2 19:37:37.688228 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Oct 2 19:37:37.688272 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Oct 2 19:37:37.688316 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 2 19:37:37.688367 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Oct 2 19:37:37.688419 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.688465 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.688514 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.688559 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.688606 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.688655 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.688853 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.688907 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.688957 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.689003 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.689052 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.689101 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.689150 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.689196 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.689261 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.689308 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.689357 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.689405 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.689454 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.689500 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.689549 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.689593 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.689647 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.689694 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.689743 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.689790 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.689838 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Oct 
2 19:37:37.689883 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.689932 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.689981 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.690030 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.690076 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.690124 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.690170 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.690227 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.690278 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.693327 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.693399 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.693451 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.693499 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.693551 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.693602 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.693659 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.693705 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.693754 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.693799 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.693849 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.693896 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.693946 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.693991 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.694041 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.694088 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.694137 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.694182 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.694255 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.694304 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.694355 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.694400 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.694451 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.694497 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.694549 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.694594 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.694650 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Oct 2 19:37:37.694724 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.694776 kernel: pci_bus 0000:01: extended config space not accessible Oct 2 19:37:37.694824 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 2 19:37:37.694873 kernel: pci_bus 0000:02: extended config space not accessible Oct 2 19:37:37.694882 kernel: acpiphp: Slot [32] registered Oct 2 19:37:37.694889 kernel: acpiphp: Slot [33] registered Oct 2 19:37:37.694895 kernel: acpiphp: Slot [34] registered Oct 2 19:37:37.694900 kernel: acpiphp: Slot [35] 
registered Oct 2 19:37:37.694906 kernel: acpiphp: Slot [36] registered Oct 2 19:37:37.694912 kernel: acpiphp: Slot [37] registered Oct 2 19:37:37.694917 kernel: acpiphp: Slot [38] registered Oct 2 19:37:37.694923 kernel: acpiphp: Slot [39] registered Oct 2 19:37:37.694930 kernel: acpiphp: Slot [40] registered Oct 2 19:37:37.694936 kernel: acpiphp: Slot [41] registered Oct 2 19:37:37.694941 kernel: acpiphp: Slot [42] registered Oct 2 19:37:37.694947 kernel: acpiphp: Slot [43] registered Oct 2 19:37:37.694952 kernel: acpiphp: Slot [44] registered Oct 2 19:37:37.694958 kernel: acpiphp: Slot [45] registered Oct 2 19:37:37.694964 kernel: acpiphp: Slot [46] registered Oct 2 19:37:37.694969 kernel: acpiphp: Slot [47] registered Oct 2 19:37:37.694975 kernel: acpiphp: Slot [48] registered Oct 2 19:37:37.694982 kernel: acpiphp: Slot [49] registered Oct 2 19:37:37.694988 kernel: acpiphp: Slot [50] registered Oct 2 19:37:37.694994 kernel: acpiphp: Slot [51] registered Oct 2 19:37:37.695000 kernel: acpiphp: Slot [52] registered Oct 2 19:37:37.695005 kernel: acpiphp: Slot [53] registered Oct 2 19:37:37.695011 kernel: acpiphp: Slot [54] registered Oct 2 19:37:37.695016 kernel: acpiphp: Slot [55] registered Oct 2 19:37:37.695022 kernel: acpiphp: Slot [56] registered Oct 2 19:37:37.695028 kernel: acpiphp: Slot [57] registered Oct 2 19:37:37.695033 kernel: acpiphp: Slot [58] registered Oct 2 19:37:37.695040 kernel: acpiphp: Slot [59] registered Oct 2 19:37:37.695046 kernel: acpiphp: Slot [60] registered Oct 2 19:37:37.695051 kernel: acpiphp: Slot [61] registered Oct 2 19:37:37.695057 kernel: acpiphp: Slot [62] registered Oct 2 19:37:37.695062 kernel: acpiphp: Slot [63] registered Oct 2 19:37:37.695108 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Oct 2 19:37:37.695163 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 2 19:37:37.695258 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 2 19:37:37.695315 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 2 19:37:37.695358 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Oct 2 19:37:37.695403 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000cffff window] (subtractive decode) Oct 2 19:37:37.695446 kernel: pci 0000:00:11.0: bridge window [mem 0x000d0000-0x000d3fff window] (subtractive decode) Oct 2 19:37:37.695489 kernel: pci 0000:00:11.0: bridge window [mem 0x000d4000-0x000d7fff window] (subtractive decode) Oct 2 19:37:37.695534 kernel: pci 0000:00:11.0: bridge window [mem 0x000d8000-0x000dbfff window] (subtractive decode) Oct 2 19:37:37.695578 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Oct 2 19:37:37.695621 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Oct 2 19:37:37.695668 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Oct 2 19:37:37.695719 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Oct 2 19:37:37.695765 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Oct 2 19:37:37.695811 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Oct 2 19:37:37.695857 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Oct 2 19:37:37.695902 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Oct 2 19:37:37.695948 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Oct 2 19:37:37.695996 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 2 19:37:37.696040 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 2 19:37:37.696085 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 2 19:37:37.696132 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 2 19:37:37.696175 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Oct 2 19:37:37.696233 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 2 19:37:37.696279 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Oct 2 19:37:37.696326 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 2 19:37:37.696374 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 2 19:37:37.696418 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 2 19:37:37.697514 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 2 19:37:37.697591 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 2 19:37:37.697666 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 2 19:37:37.697724 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 2 19:37:37.697777 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 2 19:37:37.697822 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 2 19:37:37.697868 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 2 19:37:37.697914 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 2 19:37:37.697959 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 2 19:37:37.698002 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 2 19:37:37.698051 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 2 19:37:37.698095 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 2 19:37:37.698140 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 2 19:37:37.698186 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 2 19:37:37.698254 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 2 19:37:37.698301 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 2 19:37:37.698357 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Oct 2 19:37:37.698408 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Oct 2 19:37:37.698454 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Oct 2 19:37:37.698500 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Oct 2 19:37:37.698545 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Oct 2 19:37:37.698590 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Oct 2 19:37:37.698635 kernel: pci 0000:0b:00.0: supports D1 D2 Oct 2 19:37:37.698681 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 2 19:37:37.698727 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Oct 2 19:37:37.698789 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 2 19:37:37.699156 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 2 19:37:37.699297 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 2 19:37:37.699350 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 2 19:37:37.699396 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 2 19:37:37.699440 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 2 19:37:37.710442 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 2 19:37:37.710520 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 2 19:37:37.710578 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 2 19:37:37.710624 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 2 19:37:37.710680 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 2 19:37:37.710730 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 2 19:37:37.710775 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 2 19:37:37.710820 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 2 19:37:37.710867 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 2 19:37:37.710919 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 2 19:37:37.710965 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 2 19:37:37.711013 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 2 19:37:37.711056 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 2 19:37:37.711099 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 2 19:37:37.711145 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 2 19:37:37.711189 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 2 19:37:37.711242 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 2 19:37:37.711291 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 2 19:37:37.711336 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 2 19:37:37.711382 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 2 19:37:37.711429 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 2 19:37:37.711484 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 2 19:37:37.711542 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 2 19:37:37.711586 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 2 19:37:37.711633 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 2 19:37:37.711693 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 2 19:37:37.711738 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 2 19:37:37.711800 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 2 19:37:37.711852 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 2 19:37:37.711897 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 2 19:37:37.711942 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 2 19:37:37.713310 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 2 19:37:37.713380 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 2 19:37:37.713433 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 2 19:37:37.713479 kernel: pci 0000:00:17.3: bridge window [mem 
0xe6e00000-0xe6efffff 64bit pref] Oct 2 19:37:37.713528 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 2 19:37:37.713574 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 2 19:37:37.713618 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 2 19:37:37.713665 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 2 19:37:37.713709 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 2 19:37:37.713759 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 2 19:37:37.713811 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 2 19:37:37.713856 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 2 19:37:37.713900 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 2 19:37:37.713949 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 2 19:37:37.714017 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 2 19:37:37.714063 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 2 19:37:37.714111 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 2 19:37:37.714156 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 2 19:37:37.714202 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 2 19:37:37.714278 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 2 19:37:37.714352 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 2 19:37:37.714905 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 2 19:37:37.714959 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 2 19:37:37.715007 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 2 19:37:37.715055 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 2 19:37:37.715104 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 2 19:37:37.715152 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 2 19:37:37.715201 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 2 19:37:37.715874 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 2 19:37:37.715925 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 2 19:37:37.715975 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 2 19:37:37.716020 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 2 19:37:37.716064 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 2 19:37:37.717269 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 2 19:37:37.717331 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 2 19:37:37.717379 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 2 19:37:37.717427 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 2 19:37:37.717473 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 2 19:37:37.717516 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 2 19:37:37.717563 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 2 19:37:37.717607 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 2 19:37:37.717655 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 2 19:37:37.717664 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Oct 2 19:37:37.717670 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Oct 2 19:37:37.717675 kernel: ACPI: PCI: Interrupt 
link LNKB disabled Oct 2 19:37:37.717681 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 2 19:37:37.717687 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Oct 2 19:37:37.717693 kernel: iommu: Default domain type: Translated Oct 2 19:37:37.717699 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 2 19:37:37.717743 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Oct 2 19:37:37.717790 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 2 19:37:37.717834 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Oct 2 19:37:37.717844 kernel: vgaarb: loaded Oct 2 19:37:37.717850 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:37:37.717856 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:37:37.717862 kernel: PTP clock support registered Oct 2 19:37:37.717867 kernel: PCI: Using ACPI for IRQ routing Oct 2 19:37:37.717873 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 2 19:37:37.717880 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Oct 2 19:37:37.717888 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Oct 2 19:37:37.717893 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Oct 2 19:37:37.717899 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Oct 2 19:37:37.717905 kernel: clocksource: Switched to clocksource tsc-early Oct 2 19:37:37.717911 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:37:37.717917 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:37:37.717923 kernel: pnp: PnP ACPI init Oct 2 19:37:37.717973 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Oct 2 19:37:37.718016 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Oct 2 19:37:37.718056 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Oct 2 19:37:37.718100 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Oct 2 19:37:37.718145 kernel: pnp 00:06: [dma 2] Oct 2 19:37:37.718193 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Oct 2 19:37:37.718963 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Oct 2 19:37:37.719013 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Oct 2 19:37:37.719024 kernel: pnp: PnP ACPI: found 8 devices Oct 2 19:37:37.719031 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 2 19:37:37.719037 kernel: NET: Registered PF_INET protocol family Oct 2 19:37:37.719043 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:37:37.719049 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 2 19:37:37.719055 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:37:37.719060 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 2 19:37:37.719066 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Oct 2 19:37:37.719073 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 2 19:37:37.719078 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 2 19:37:37.719084 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 2 19:37:37.719090 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:37:37.719096 kernel: NET: Registered PF_XDP protocol family Oct 2 19:37:37.719148 kernel: pci 0000:00:15.0: bridge window [mem 
0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Oct 2 19:37:37.719197 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Oct 2 19:37:37.720312 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 2 19:37:37.720372 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 2 19:37:37.720424 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 2 19:37:37.720472 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Oct 2 19:37:37.720520 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Oct 2 19:37:37.720567 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Oct 2 19:37:37.720614 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Oct 2 19:37:37.720663 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Oct 2 19:37:37.720710 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Oct 2 19:37:37.720757 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Oct 2 19:37:37.720804 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Oct 2 19:37:37.720851 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Oct 2 19:37:37.720898 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Oct 2 19:37:37.720946 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Oct 2 19:37:37.721284 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Oct 2 19:37:37.721344 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Oct 2 19:37:37.721396 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Oct 2 19:37:37.721454 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Oct 2 19:37:37.721514 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Oct 2 19:37:37.721585 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Oct 2 19:37:37.721634 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Oct 2 19:37:37.721690 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Oct 2 19:37:37.721746 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Oct 2 19:37:37.721793 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.721848 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.722754 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.722941 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.722997 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.723046 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.723093 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.723139 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.723184 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.723716 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 
0x1000] Oct 2 19:37:37.723776 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.723825 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.723872 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.723919 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.723966 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.724011 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.724057 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.724101 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.724151 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.724194 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.725364 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.725418 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.725468 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.725536 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.725795 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.725845 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.725895 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.726870 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.726927 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.726976 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.727025 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.727071 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.727118 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.727163 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.728120 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.728190 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.728301 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.728353 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.728406 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.728456 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.728508 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.728562 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.728614 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.728660 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.728705 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.728749 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.728793 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.728836 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.728879 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.728924 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.728970 
kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.729014 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.729062 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.729105 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.729152 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.729195 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.729250 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.729294 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.729340 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.729385 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.729430 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.729478 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.729524 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.729569 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.729615 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.729659 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.729705 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.729749 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.729795 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.731970 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.732036 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.732090 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.732139 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.732184 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.732257 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.732306 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.732354 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.732400 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.732447 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.732493 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.732544 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.732589 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.732635 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Oct 2 19:37:37.732681 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Oct 2 19:37:37.732728 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 2 19:37:37.732775 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Oct 2 19:37:37.732820 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 2 19:37:37.732864 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 2 19:37:37.732908 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 2 19:37:37.732963 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Oct 2 19:37:37.733010 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] 
Oct 2 19:37:37.733055 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 2 19:37:37.733099 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 2 19:37:37.733145 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Oct 2 19:37:37.733190 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 2 19:37:37.734615 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Oct 2 19:37:37.734708 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 2 19:37:37.734760 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Oct 2 19:37:37.734825 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 2 19:37:37.734889 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 2 19:37:37.734940 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 2 19:37:37.734995 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 2 19:37:37.735046 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 2 19:37:37.735098 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 2 19:37:37.735152 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 2 19:37:37.735205 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 2 19:37:37.737311 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 2 19:37:37.737374 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 2 19:37:37.737425 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 2 19:37:37.737482 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 2 19:37:37.737529 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 2 19:37:37.737578 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 2 19:37:37.737622 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 2 19:37:37.737672 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 2 19:37:37.737720 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 2 19:37:37.737766 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 2 19:37:37.737809 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 2 19:37:37.737860 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Oct 2 19:37:37.737907 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 2 19:37:37.737951 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 2 19:37:37.737996 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 2 19:37:37.738040 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Oct 2 19:37:37.738090 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 2 19:37:37.738136 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 2 19:37:37.738181 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 2 19:37:37.739080 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 2 19:37:37.739136 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 2 19:37:37.739183 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 2 19:37:37.739265 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 2 19:37:37.739312 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 2 19:37:37.739357 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 2 19:37:37.739511 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] 
Oct 2 19:37:37.739563 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 2 19:37:37.739612 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 2 19:37:37.739657 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 2 19:37:37.739702 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 2 19:37:37.739748 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 2 19:37:37.739791 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 2 19:37:37.739834 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 2 19:37:37.739880 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 2 19:37:37.739925 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 2 19:37:37.739972 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 2 19:37:37.740018 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 2 19:37:37.740062 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 2 19:37:37.740109 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 2 19:37:37.740155 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 2 19:37:37.740200 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 2 19:37:37.741463 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 2 19:37:37.741516 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 2 19:37:37.741565 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 2 19:37:37.741610 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 2 19:37:37.741659 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 2 19:37:37.741704 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 2 19:37:37.741750 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 2 19:37:37.741795 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 2 19:37:37.741839 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 2 19:37:37.741883 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 2 19:37:37.741928 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 2 19:37:37.741972 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 2 19:37:37.742016 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 2 19:37:37.742065 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 2 19:37:37.743328 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 2 19:37:37.743383 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 2 19:37:37.744033 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 2 19:37:37.744088 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 2 19:37:37.744136 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 2 19:37:37.744184 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 2 19:37:37.744253 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 2 19:37:37.744301 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 2 19:37:37.744347 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 2 19:37:37.744397 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 2 19:37:37.744451 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 2 19:37:37.744499 kernel: pci 0000:00:18.0: PCI bridge to [bus 
1b] Oct 2 19:37:37.744543 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 2 19:37:37.744588 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 2 19:37:37.744632 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 2 19:37:37.744702 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 2 19:37:37.744747 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 2 19:37:37.744792 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 2 19:37:37.744840 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 2 19:37:37.744885 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 2 19:37:37.744931 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 2 19:37:37.744976 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 2 19:37:37.746233 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 2 19:37:37.746316 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 2 19:37:37.746368 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 2 19:37:37.746418 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 2 19:37:37.746480 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 2 19:37:37.746543 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 2 19:37:37.746602 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 2 19:37:37.746650 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 2 19:37:37.746695 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 2 19:37:37.746742 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 2 19:37:37.746786 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 2 19:37:37.746829 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 2 19:37:37.746875 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 2 19:37:37.746920 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 2 19:37:37.746964 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 2 19:37:37.747016 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Oct 2 19:37:37.747057 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000cffff window] Oct 2 19:37:37.747097 kernel: pci_bus 0000:00: resource 6 [mem 0x000d0000-0x000d3fff window] Oct 2 19:37:37.747136 kernel: pci_bus 0000:00: resource 7 [mem 0x000d4000-0x000d7fff window] Oct 2 19:37:37.747174 kernel: pci_bus 0000:00: resource 8 [mem 0x000d8000-0x000dbfff window] Oct 2 19:37:37.747359 kernel: pci_bus 0000:00: resource 9 [mem 0xc0000000-0xfebfffff window] Oct 2 19:37:37.747405 kernel: pci_bus 0000:00: resource 10 [io 0x0000-0x0cf7 window] Oct 2 19:37:37.747449 kernel: pci_bus 0000:00: resource 11 [io 0x0d00-0xfeff window] Oct 2 19:37:37.747494 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Oct 2 19:37:37.747536 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Oct 2 19:37:37.747576 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 2 19:37:37.747632 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Oct 2 19:37:37.747686 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000cffff window] Oct 2 19:37:37.747729 kernel: pci_bus 0000:02: resource 6 [mem 0x000d0000-0x000d3fff window] Oct 2 19:37:37.747773 kernel: pci_bus 0000:02: resource 7 [mem 0x000d4000-0x000d7fff window] Oct 
2 19:37:37.747814 kernel: pci_bus 0000:02: resource 8 [mem 0x000d8000-0x000dbfff window] Oct 2 19:37:37.747855 kernel: pci_bus 0000:02: resource 9 [mem 0xc0000000-0xfebfffff window] Oct 2 19:37:37.747896 kernel: pci_bus 0000:02: resource 10 [io 0x0000-0x0cf7 window] Oct 2 19:37:37.747936 kernel: pci_bus 0000:02: resource 11 [io 0x0d00-0xfeff window] Oct 2 19:37:37.747992 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Oct 2 19:37:37.748056 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Oct 2 19:37:37.748099 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Oct 2 19:37:37.748149 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Oct 2 19:37:37.748191 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Oct 2 19:37:37.748255 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Oct 2 19:37:37.748305 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Oct 2 19:37:37.748347 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Oct 2 19:37:37.748389 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Oct 2 19:37:37.748436 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Oct 2 19:37:37.748481 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Oct 2 19:37:37.748530 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Oct 2 19:37:37.748572 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 2 19:37:37.748623 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Oct 2 19:37:37.748675 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Oct 2 19:37:37.748725 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Oct 2 19:37:37.748769 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Oct 2 19:37:37.748818 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Oct 2 19:37:37.748860 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Oct 2 19:37:37.748908 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Oct 2 19:37:37.748949 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Oct 2 19:37:37.748993 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Oct 2 19:37:37.749041 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Oct 2 19:37:37.749082 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Oct 2 19:37:37.749123 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Oct 2 19:37:37.749169 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Oct 2 19:37:37.749255 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Oct 2 19:37:37.749300 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Oct 2 19:37:37.749354 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Oct 2 19:37:37.749397 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 2 19:37:37.749455 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Oct 2 19:37:37.749500 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 2 19:37:37.749546 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Oct 2 19:37:37.749588 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Oct 2 19:37:37.749639 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Oct 2 19:37:37.749681 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit 
pref] Oct 2 19:37:37.749727 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Oct 2 19:37:37.749770 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 2 19:37:37.749816 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Oct 2 19:37:37.749858 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Oct 2 19:37:37.749900 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 2 19:37:37.749947 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Oct 2 19:37:37.749988 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Oct 2 19:37:37.750029 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Oct 2 19:37:37.750091 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Oct 2 19:37:37.750134 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Oct 2 19:37:37.750176 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Oct 2 19:37:37.750283 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Oct 2 19:37:37.750329 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 2 19:37:37.750375 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Oct 2 19:37:37.750416 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 2 19:37:37.750462 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Oct 2 19:37:37.750504 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Oct 2 19:37:37.750555 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Oct 2 19:37:37.750603 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Oct 2 19:37:37.750658 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Oct 2 19:37:37.750700 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 2 19:37:37.750750 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Oct 2 19:37:37.750793 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Oct 2 19:37:37.750835 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Oct 2 19:37:37.750881 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Oct 2 19:37:37.750936 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Oct 2 19:37:37.750997 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Oct 2 19:37:37.751064 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Oct 2 19:37:37.751108 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Oct 2 19:37:37.751224 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Oct 2 19:37:37.751301 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 2 19:37:37.751359 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Oct 2 19:37:37.751409 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Oct 2 19:37:37.751466 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Oct 2 19:37:37.751514 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Oct 2 19:37:37.751569 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Oct 2 19:37:37.753917 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Oct 2 19:37:37.753987 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Oct 2 19:37:37.754037 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 2 19:37:37.754095 kernel: pci 0000:00:00.0: 
Limiting direct PCI/PCI transfers Oct 2 19:37:37.754105 kernel: PCI: CLS 32 bytes, default 64 Oct 2 19:37:37.754112 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 2 19:37:37.754119 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 2 19:37:37.754128 kernel: clocksource: Switched to clocksource tsc Oct 2 19:37:37.754134 kernel: Initialise system trusted keyrings Oct 2 19:37:37.754141 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 2 19:37:37.754147 kernel: Key type asymmetric registered Oct 2 19:37:37.754153 kernel: Asymmetric key parser 'x509' registered Oct 2 19:37:37.754159 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:37:37.754165 kernel: io scheduler mq-deadline registered Oct 2 19:37:37.754171 kernel: io scheduler kyber registered Oct 2 19:37:37.754177 kernel: io scheduler bfq registered Oct 2 19:37:37.754244 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Oct 2 19:37:37.754297 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.754351 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Oct 2 19:37:37.754403 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.754457 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Oct 2 19:37:37.754508 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.754562 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Oct 2 19:37:37.754613 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.754699 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Oct 2 19:37:37.754753 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.754806 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Oct 2 19:37:37.754857 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.754910 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Oct 2 19:37:37.754964 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.755017 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Oct 2 19:37:37.755066 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.755118 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Oct 2 19:37:37.755169 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.755232 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Oct 2 19:37:37.755289 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.755341 kernel: pcieport 0000:00:16.2: PME: Signaling with 
IRQ 34 Oct 2 19:37:37.755392 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.755443 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Oct 2 19:37:37.755493 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.755544 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Oct 2 19:37:37.755598 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.755650 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Oct 2 19:37:37.755700 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.755752 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Oct 2 19:37:37.755805 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.755861 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Oct 2 19:37:37.755911 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.755965 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Oct 2 19:37:37.756017 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.756068 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Oct 2 19:37:37.756119 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.756173 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Oct 2 19:37:37.756235 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.756292 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Oct 2 19:37:37.756356 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.756463 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Oct 2 19:37:37.756515 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.756564 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Oct 2 19:37:37.756617 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.756666 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Oct 2 19:37:37.756718 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.756765 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Oct 2 19:37:37.756812 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.756863 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Oct 2 19:37:37.756909 kernel: 
pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.756957 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Oct 2 19:37:37.757001 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.757049 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Oct 2 19:37:37.758292 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.758372 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Oct 2 19:37:37.758435 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.758494 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Oct 2 19:37:37.758545 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.758597 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Oct 2 19:37:37.758648 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.758702 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Oct 2 19:37:37.758752 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.758805 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Oct 2 19:37:37.758855 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 2 19:37:37.758864 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 2 19:37:37.758873 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:37:37.758879 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 2 19:37:37.758885 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Oct 2 19:37:37.758892 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 2 19:37:37.758898 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 2 19:37:37.758951 kernel: rtc_cmos 00:01: registered as rtc0 Oct 2 19:37:37.758999 kernel: rtc_cmos 00:01: setting system clock to 2023-10-02T19:37:37 UTC (1696275457) Oct 2 19:37:37.759046 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Oct 2 19:37:37.759057 kernel: fail to initialize ptp_kvm Oct 2 19:37:37.759065 kernel: intel_pstate: CPU model not supported Oct 2 19:37:37.759071 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:37:37.759077 kernel: Segment Routing with IPv6 Oct 2 19:37:37.759084 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:37:37.759090 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:37:37.759096 kernel: Key type dns_resolver registered Oct 2 19:37:37.759102 kernel: IPI shorthand broadcast: enabled Oct 2 19:37:37.759109 kernel: sched_clock: Marking stable (859177635, 224988921)->(1157543025, -73376469) Oct 2 19:37:37.759117 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 2 19:37:37.759123 kernel: registered taskstats version 1 Oct 2 19:37:37.759130 kernel: Loading compiled-in X.509 certificates Oct 2 
19:37:37.759136 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861' Oct 2 19:37:37.759142 kernel: Key type .fscrypt registered Oct 2 19:37:37.759148 kernel: Key type fscrypt-provisioning registered Oct 2 19:37:37.759154 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 19:37:37.759160 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:37:37.759168 kernel: ima: No architecture policies found Oct 2 19:37:37.759175 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 19:37:37.759181 kernel: Write protecting the kernel read-only data: 28672k Oct 2 19:37:37.759188 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 19:37:37.759194 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K Oct 2 19:37:37.759200 kernel: Run /init as init process Oct 2 19:37:37.759213 kernel: with arguments: Oct 2 19:37:37.759219 kernel: /init Oct 2 19:37:37.759225 kernel: with environment: Oct 2 19:37:37.759231 kernel: HOME=/ Oct 2 19:37:37.759239 kernel: TERM=linux Oct 2 19:37:37.759245 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:37:37.759253 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:37:37.759262 systemd[1]: Detected virtualization vmware. Oct 2 19:37:37.759268 systemd[1]: Detected architecture x86-64. Oct 2 19:37:37.759274 systemd[1]: Running in initrd. Oct 2 19:37:37.759280 systemd[1]: No hostname configured, using default hostname. Oct 2 19:37:37.759286 systemd[1]: Hostname set to . Oct 2 19:37:37.759294 systemd[1]: Initializing machine ID from random generator. Oct 2 19:37:37.759301 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:37:37.759307 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:37:37.759313 systemd[1]: Reached target cryptsetup.target. Oct 2 19:37:37.759319 systemd[1]: Reached target paths.target. Oct 2 19:37:37.759325 systemd[1]: Reached target slices.target. Oct 2 19:37:37.759331 systemd[1]: Reached target swap.target. Oct 2 19:37:37.759338 systemd[1]: Reached target timers.target. Oct 2 19:37:37.759346 systemd[1]: Listening on iscsid.socket. Oct 2 19:37:37.759352 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:37:37.759358 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:37:37.759365 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:37:37.759371 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:37:37.759378 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:37:37.759384 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:37:37.759390 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:37:37.759398 systemd[1]: Reached target sockets.target. Oct 2 19:37:37.759404 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:37:37.759410 systemd[1]: Finished network-cleanup.service. Oct 2 19:37:37.759418 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:37:37.759429 systemd[1]: Starting systemd-journald.service... Oct 2 19:37:37.759437 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:37:37.759443 systemd[1]: Starting systemd-resolved.service... Oct 2 19:37:37.759450 systemd[1]: Starting systemd-vconsole-setup.service... 
Oct 2 19:37:37.759456 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:37:37.759464 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:37:37.759470 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:37:37.759476 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:37:37.759483 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:37:37.759489 kernel: audit: type=1130 audit(1696275457.673:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.759495 kernel: audit: type=1130 audit(1696275457.677:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.759502 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:37:37.759508 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:37:37.759515 kernel: Bridge firewalling registered Oct 2 19:37:37.759522 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:37:37.759528 systemd[1]: Started systemd-resolved.service. Oct 2 19:37:37.759534 systemd[1]: Reached target nss-lookup.target. Oct 2 19:37:37.759541 kernel: audit: type=1130 audit(1696275457.700:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.759547 kernel: audit: type=1130 audit(1696275457.700:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.759553 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:37:37.759561 kernel: SCSI subsystem initialized Oct 2 19:37:37.759567 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:37:37.759573 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:37:37.759581 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:37:37.759587 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:37:37.759594 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:37:37.759602 kernel: audit: type=1130 audit(1696275457.751:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.759608 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:37:37.759619 systemd-journald[217]: Journal started Oct 2 19:37:37.759656 systemd-journald[217]: Runtime Journal (/run/log/journal/773cf916c13f49c9b350be610cfab1aa) is 4.8M, max 38.8M, 34.0M free. Oct 2 19:37:37.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:37:37.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.661030 systemd-modules-load[218]: Inserted module 'overlay' Oct 2 19:37:37.696743 systemd-resolved[219]: Positive Trust Anchors: Oct 2 19:37:37.696749 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:37:37.696769 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:37:37.762036 systemd[1]: Started systemd-journald.service. Oct 2 19:37:37.698910 systemd-resolved[219]: Defaulting to hostname 'linux'. Oct 2 19:37:37.699522 systemd-modules-load[218]: Inserted module 'br_netfilter' Oct 2 19:37:37.751281 systemd-modules-load[218]: Inserted module 'dm_multipath' Oct 2 19:37:37.762490 dracut-cmdline[233]: dracut-dracut-053 Oct 2 19:37:37.762490 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Oct 2 19:37:37.762490 dracut-cmdline[233]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:37:37.768428 kernel: audit: type=1130 audit(1696275457.762:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.768481 kernel: audit: type=1130 audit(1696275457.762:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.763593 systemd[1]: Finished systemd-sysctl.service. 
Oct 2 19:37:37.773240 kernel: iscsi: registered transport (tcp) Oct 2 19:37:37.787355 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:37:37.787394 kernel: QLogic iSCSI HBA Driver Oct 2 19:37:37.805509 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:37:37.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.806400 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:37:37.810411 kernel: audit: type=1130 audit(1696275457.804:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.850249 kernel: raid6: avx2x4 gen() 42102 MB/s Oct 2 19:37:37.866239 kernel: raid6: avx2x4 xor() 9815 MB/s Oct 2 19:37:37.883231 kernel: raid6: avx2x2 gen() 41844 MB/s Oct 2 19:37:37.900242 kernel: raid6: avx2x2 xor() 31702 MB/s Oct 2 19:37:37.917233 kernel: raid6: avx2x1 gen() 29411 MB/s Oct 2 19:37:37.934229 kernel: raid6: avx2x1 xor() 27553 MB/s Oct 2 19:37:37.951226 kernel: raid6: sse2x4 gen() 21005 MB/s Oct 2 19:37:37.968232 kernel: raid6: sse2x4 xor() 9391 MB/s Oct 2 19:37:37.985229 kernel: raid6: sse2x2 gen() 20900 MB/s Oct 2 19:37:38.002232 kernel: raid6: sse2x2 xor() 13298 MB/s Oct 2 19:37:38.019237 kernel: raid6: sse2x1 gen() 17677 MB/s Oct 2 19:37:38.036459 kernel: raid6: sse2x1 xor() 8796 MB/s Oct 2 19:37:38.036510 kernel: raid6: using algorithm avx2x4 gen() 42102 MB/s Oct 2 19:37:38.036522 kernel: raid6: .... xor() 9815 MB/s, rmw enabled Oct 2 19:37:38.037653 kernel: raid6: using avx2x2 recovery algorithm Oct 2 19:37:38.046228 kernel: xor: automatically using best checksumming function avx Oct 2 19:37:38.108230 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 19:37:38.113140 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:37:38.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:38.114000 audit: BPF prog-id=7 op=LOAD Oct 2 19:37:38.114000 audit: BPF prog-id=8 op=LOAD Oct 2 19:37:38.116612 kernel: audit: type=1130 audit(1696275458.112:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:38.116103 systemd[1]: Starting systemd-udevd.service... Oct 2 19:37:38.124846 systemd-udevd[416]: Using default interface naming scheme 'v252'. Oct 2 19:37:38.127828 systemd[1]: Started systemd-udevd.service. Oct 2 19:37:38.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:38.129143 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:37:38.137023 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Oct 2 19:37:38.155476 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:37:38.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:38.156073 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:37:38.222831 systemd[1]: Finished systemd-udev-trigger.service. 
Oct 2 19:37:38.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:38.278223 kernel: VMware PVSCSI driver - version 1.0.7.0-k Oct 2 19:37:38.281690 kernel: vmw_pvscsi: using 64bit dma Oct 2 19:37:38.281719 kernel: vmw_pvscsi: max_id: 16 Oct 2 19:37:38.281727 kernel: vmw_pvscsi: setting ring_pages to 8 Oct 2 19:37:38.286581 kernel: vmw_pvscsi: enabling reqCallThreshold Oct 2 19:37:38.286614 kernel: vmw_pvscsi: driver-based request coalescing enabled Oct 2 19:37:38.286622 kernel: vmw_pvscsi: using MSI-X Oct 2 19:37:38.287867 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Oct 2 19:37:38.296220 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Oct 2 19:37:38.298224 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Oct 2 19:37:38.305224 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Oct 2 19:37:38.310237 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:37:38.312226 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Oct 2 19:37:38.323228 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Oct 2 19:37:38.325658 kernel: AVX2 version of gcm_enc/dec engaged. Oct 2 19:37:38.325688 kernel: AES CTR mode by8 optimization enabled Oct 2 19:37:38.328223 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Oct 2 19:37:38.328333 kernel: libata version 3.00 loaded. Oct 2 19:37:38.333465 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Oct 2 19:37:38.334465 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 2 19:37:38.334543 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Oct 2 19:37:38.334649 kernel: sd 0:0:0:0: [sda] Cache data unavailable Oct 2 19:37:38.334749 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Oct 2 19:37:38.337223 kernel: ata_piix 0000:00:07.1: version 2.13 Oct 2 19:37:38.343379 kernel: scsi host1: ata_piix Oct 2 19:37:38.344239 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 19:37:38.345605 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 2 19:37:38.347846 kernel: scsi host2: ata_piix Oct 2 19:37:38.347940 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Oct 2 19:37:38.347949 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Oct 2 19:37:38.372222 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (462) Oct 2 19:37:38.373681 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:37:38.375876 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:37:38.376156 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:37:38.376849 systemd[1]: Starting disk-uuid.service... Oct 2 19:37:38.390499 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:37:38.392769 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:37:38.400226 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 19:37:38.406453 kernel: GPT:disk_guids don't match. Oct 2 19:37:38.406486 kernel: GPT: Use GNU Parted to correct GPT errors. 
Oct 2 19:37:38.406497 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 19:37:38.514224 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Oct 2 19:37:38.520269 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Oct 2 19:37:38.545341 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Oct 2 19:37:38.545540 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 2 19:37:38.563254 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 2 19:37:39.414234 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 19:37:39.414523 disk-uuid[539]: The operation has completed successfully. Oct 2 19:37:39.452009 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:37:39.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:39.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:39.452071 systemd[1]: Finished disk-uuid.service. Oct 2 19:37:39.452645 systemd[1]: Starting verity-setup.service... Oct 2 19:37:39.463229 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 19:37:39.510229 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:37:39.510771 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:37:39.512726 systemd[1]: Finished verity-setup.service. Oct 2 19:37:39.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:39.565759 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:37:39.566220 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:37:39.566327 systemd[1]: Starting afterburn-network-kargs.service... Oct 2 19:37:39.566762 systemd[1]: Starting ignition-setup.service... Oct 2 19:37:39.581140 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:37:39.581176 kernel: BTRFS info (device sda6): using free space tree Oct 2 19:37:39.581185 kernel: BTRFS info (device sda6): has skinny extents Oct 2 19:37:39.588463 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 2 19:37:39.596395 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:37:39.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:39.601846 systemd[1]: Finished ignition-setup.service. Oct 2 19:37:39.602446 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:37:39.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:39.662025 systemd[1]: Finished afterburn-network-kargs.service. Oct 2 19:37:39.662668 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 19:37:39.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:37:39.709000 audit: BPF prog-id=9 op=LOAD Oct 2 19:37:39.709964 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:37:39.710893 systemd[1]: Starting systemd-networkd.service... Oct 2 19:37:39.730380 systemd-networkd[725]: lo: Link UP Oct 2 19:37:39.730386 systemd-networkd[725]: lo: Gained carrier Oct 2 19:37:39.731019 systemd-networkd[725]: Enumeration completed Oct 2 19:37:39.735258 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 2 19:37:39.735439 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 2 19:37:39.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:39.731188 systemd[1]: Started systemd-networkd.service. Oct 2 19:37:39.731342 systemd[1]: Reached target network.target. Oct 2 19:37:39.731382 systemd-networkd[725]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Oct 2 19:37:39.731858 systemd[1]: Starting iscsiuio.service... Oct 2 19:37:39.736857 systemd-networkd[725]: ens192: Link UP Oct 2 19:37:39.736862 systemd-networkd[725]: ens192: Gained carrier Oct 2 19:37:39.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:39.738043 systemd[1]: Started iscsiuio.service. Oct 2 19:37:39.738664 systemd[1]: Starting iscsid.service... Oct 2 19:37:39.740624 iscsid[730]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:37:39.740624 iscsid[730]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Oct 2 19:37:39.740624 iscsid[730]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:37:39.740624 iscsid[730]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:37:39.740624 iscsid[730]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:37:39.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:39.742858 iscsid[730]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:37:39.741635 systemd[1]: Started iscsid.service. Oct 2 19:37:39.742166 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:37:39.749009 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:37:39.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:39.749461 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:37:39.749571 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:37:39.749740 systemd[1]: Reached target remote-fs.target. Oct 2 19:37:39.750702 systemd[1]: Starting dracut-pre-mount.service... 
Oct 2 19:37:39.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:39.755952 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:37:39.817398 ignition[597]: Ignition 2.14.0 Oct 2 19:37:39.817405 ignition[597]: Stage: fetch-offline Oct 2 19:37:39.817461 ignition[597]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:37:39.817484 ignition[597]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Oct 2 19:37:39.820969 ignition[597]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 2 19:37:39.821069 ignition[597]: parsed url from cmdline: "" Oct 2 19:37:39.821073 ignition[597]: no config URL provided Oct 2 19:37:39.821077 ignition[597]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:37:39.821082 ignition[597]: no config at "/usr/lib/ignition/user.ign" Oct 2 19:37:39.827776 ignition[597]: config successfully fetched Oct 2 19:37:39.827798 ignition[597]: parsing config with SHA512: e0bcdabcbabb249ab5df42d04755c27be3bffbfb44a90c1fe501026226e9c70e13a99904c9cd548dab673a257322d77600876fcd4d02851ef8acd91974837f1b Oct 2 19:37:39.839941 unknown[597]: fetched base config from "system" Oct 2 19:37:39.840147 systemd-resolved[219]: Detected conflict on linux IN A 139.178.70.109 Oct 2 19:37:39.840154 systemd-resolved[219]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Oct 2 19:37:39.840759 unknown[597]: fetched user config from "vmware" Oct 2 19:37:39.841225 ignition[597]: fetch-offline: fetch-offline passed Oct 2 19:37:39.841462 ignition[597]: Ignition finished successfully Oct 2 19:37:39.842095 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:37:39.842269 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 2 19:37:39.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:39.842729 systemd[1]: Starting ignition-kargs.service... Oct 2 19:37:39.848480 ignition[745]: Ignition 2.14.0 Oct 2 19:37:39.848737 ignition[745]: Stage: kargs Oct 2 19:37:39.848915 ignition[745]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:37:39.849073 ignition[745]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Oct 2 19:37:39.850467 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 2 19:37:39.852132 ignition[745]: kargs: kargs passed Oct 2 19:37:39.852301 ignition[745]: Ignition finished successfully Oct 2 19:37:39.853254 systemd[1]: Finished ignition-kargs.service. Oct 2 19:37:39.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:39.853901 systemd[1]: Starting ignition-disks.service... 
Oct 2 19:37:39.858526 ignition[751]: Ignition 2.14.0 Oct 2 19:37:39.858797 ignition[751]: Stage: disks Oct 2 19:37:39.858983 ignition[751]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:37:39.859138 ignition[751]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Oct 2 19:37:39.860505 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 2 19:37:39.861968 ignition[751]: disks: disks passed Oct 2 19:37:39.862129 ignition[751]: Ignition finished successfully Oct 2 19:37:39.862764 systemd[1]: Finished ignition-disks.service. Oct 2 19:37:39.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:39.862946 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:37:39.863060 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:37:39.863266 systemd[1]: Reached target local-fs.target. Oct 2 19:37:39.863421 systemd[1]: Reached target sysinit.target. Oct 2 19:37:39.863580 systemd[1]: Reached target basic.target. Oct 2 19:37:39.864262 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:37:39.887362 systemd-fsck[759]: ROOT: clean, 603/1628000 files, 124049/1617920 blocks Oct 2 19:37:39.888720 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:37:39.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:39.889371 systemd[1]: Mounting sysroot.mount... Oct 2 19:37:39.904138 systemd[1]: Mounted sysroot.mount. Oct 2 19:37:39.904393 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:37:39.904306 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:37:39.905644 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:37:39.906080 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:37:39.906114 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:37:39.906130 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:37:39.908260 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:37:39.908799 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:37:39.912347 initrd-setup-root[769]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:37:39.923984 initrd-setup-root[777]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:37:39.926069 initrd-setup-root[785]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:37:39.930001 initrd-setup-root[793]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:37:39.983107 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:37:39.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:39.983762 systemd[1]: Starting ignition-mount.service... Oct 2 19:37:39.984267 systemd[1]: Starting sysroot-boot.service... Oct 2 19:37:39.988184 bash[810]: umount: /sysroot/usr/share/oem: not mounted. 
Oct 2 19:37:39.993395 ignition[811]: INFO : Ignition 2.14.0 Oct 2 19:37:39.993395 ignition[811]: INFO : Stage: mount Oct 2 19:37:39.993799 ignition[811]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:37:39.993799 ignition[811]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Oct 2 19:37:39.994853 ignition[811]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 2 19:37:39.996012 ignition[811]: INFO : mount: mount passed Oct 2 19:37:39.996012 ignition[811]: INFO : Ignition finished successfully Oct 2 19:37:39.996776 systemd[1]: Finished ignition-mount.service. Oct 2 19:37:39.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:40.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:40.009851 systemd[1]: Finished sysroot-boot.service. Oct 2 19:37:40.526897 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:37:40.535230 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (820) Oct 2 19:37:40.538051 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:37:40.538074 kernel: BTRFS info (device sda6): using free space tree Oct 2 19:37:40.538084 kernel: BTRFS info (device sda6): has skinny extents Oct 2 19:37:40.544230 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 2 19:37:40.544346 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:37:40.545054 systemd[1]: Starting ignition-files.service... 
Oct 2 19:37:40.556766 ignition[840]: INFO : Ignition 2.14.0 Oct 2 19:37:40.557059 ignition[840]: INFO : Stage: files Oct 2 19:37:40.557307 ignition[840]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:37:40.557515 ignition[840]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Oct 2 19:37:40.559477 ignition[840]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 2 19:37:40.562266 ignition[840]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:37:40.562835 ignition[840]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:37:40.563022 ignition[840]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:37:40.566133 ignition[840]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:37:40.566422 ignition[840]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:37:40.567177 unknown[840]: wrote ssh authorized keys file for user: core Oct 2 19:37:40.567415 ignition[840]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:37:40.568108 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Oct 2 19:37:40.568363 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Oct 2 19:37:40.753755 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 2 19:37:40.839338 ignition[840]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Oct 2 19:37:40.839705 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Oct 2 19:37:40.839943 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz" Oct 2 19:37:40.840169 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz: attempt #1 Oct 2 19:37:40.943724 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 19:37:40.978985 ignition[840]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 961188117863ca9af5b084e84691e372efee93ad09daf6a0422e8d75a5803f394d8968064f7ca89f14e8973766201e731241f32538cf2c8d91f0233e786302df Oct 2 19:37:40.979342 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz" Oct 2 19:37:40.979750 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:37:40.979957 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubeadm: attempt #1 Oct 2 19:37:41.071049 ignition[840]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): GET result: OK Oct 2 19:37:41.696737 ignition[840]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 43b8f213f1732c092e34008d5334e6622a6603f7ec5890c395ac911d50069d0dc11a81fa38436df40fc875a10fee6ee13aa285c017f1de210171065e847c99c5 Oct 2 19:37:41.697178 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:37:41.697178 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:37:41.697178 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubelet: attempt #1 Oct 2 19:37:41.701457 systemd-networkd[725]: ens192: Gained IPv6LL Oct 2 19:37:41.763106 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 19:37:43.183776 ignition[840]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 82b36a0b83a1d48ef1f70e3ed2a263b3ce935304cdc0606d194b290217fb04f98628b0d82e200b51ccf5c05c718b2476274ae710bb143fffe28dc6bbf8407d54 Oct 2 19:37:43.184266 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:37:43.184488 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:37:43.184774 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:37:43.184959 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:37:43.185194 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:37:43.199295 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Oct 2 19:37:43.199772 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:37:43.225139 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem61673661" Oct 2 19:37:43.225427 ignition[840]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem61673661": device or resource busy Oct 2 19:37:43.225651 ignition[840]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem61673661", trying btrfs: device or resource busy Oct 2 19:37:43.225876 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem61673661" Oct 2 19:37:43.228301 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (840) Oct 2 19:37:43.228332 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem61673661" Oct 2 19:37:43.239511 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem61673661" Oct 2 19:37:43.239869 ignition[840]: INFO : files: createFilesystemsFiles: 
createFiles: op(9): op(c): [finished] unmounting "/mnt/oem61673661" Oct 2 19:37:43.240091 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Oct 2 19:37:43.240564 systemd[1]: mnt-oem61673661.mount: Deactivated successfully. Oct 2 19:37:43.253357 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Oct 2 19:37:43.253613 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Oct 2 19:37:43.253613 ignition[840]: INFO : files: op(e): [started] processing unit "vmtoolsd.service" Oct 2 19:37:43.253613 ignition[840]: INFO : files: op(e): [finished] processing unit "vmtoolsd.service" Oct 2 19:37:43.253613 ignition[840]: INFO : files: op(f): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:37:43.253613 ignition[840]: INFO : files: op(f): op(10): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:37:43.253613 ignition[840]: INFO : files: op(f): op(10): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:37:43.253613 ignition[840]: INFO : files: op(f): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:37:43.253613 ignition[840]: INFO : files: op(11): [started] processing unit "prepare-critools.service" Oct 2 19:37:43.253613 ignition[840]: INFO : files: op(11): op(12): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:37:43.253613 ignition[840]: INFO : files: op(11): op(12): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:37:43.253613 ignition[840]: INFO : files: op(11): [finished] processing unit "prepare-critools.service" Oct 2 19:37:43.253613 ignition[840]: INFO : files: op(13): [started] processing unit "coreos-metadata.service" Oct 2 19:37:43.253613 ignition[840]: INFO : files: op(13): op(14): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:37:43.255619 ignition[840]: INFO : files: op(13): op(14): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:37:43.255619 ignition[840]: INFO : files: op(13): [finished] processing unit "coreos-metadata.service" Oct 2 19:37:43.255619 ignition[840]: INFO : files: op(15): [started] setting preset to disabled for "coreos-metadata.service" Oct 2 19:37:43.255619 ignition[840]: INFO : files: op(15): op(16): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:37:43.937479 ignition[840]: INFO : files: op(15): op(16): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:37:43.937759 ignition[840]: INFO : files: op(15): [finished] setting preset to disabled for "coreos-metadata.service" Oct 2 19:37:43.937759 ignition[840]: INFO : files: op(17): [started] setting preset to enabled for "vmtoolsd.service" Oct 2 19:37:43.937759 ignition[840]: INFO : files: op(17): [finished] setting preset to enabled for "vmtoolsd.service" Oct 2 19:37:43.937759 ignition[840]: INFO : files: op(18): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:37:43.937759 ignition[840]: INFO : files: op(18): 
[finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:37:43.937759 ignition[840]: INFO : files: op(19): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:37:43.937759 ignition[840]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:37:43.937759 ignition[840]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:37:43.939323 ignition[840]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:37:43.939323 ignition[840]: INFO : files: files passed Oct 2 19:37:43.939323 ignition[840]: INFO : Ignition finished successfully Oct 2 19:37:43.939360 systemd[1]: Finished ignition-files.service. Oct 2 19:37:43.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.940482 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:37:43.944040 kernel: kauditd_printk_skb: 24 callbacks suppressed Oct 2 19:37:43.944054 kernel: audit: type=1130 audit(1696275463.938:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.944268 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:37:43.944940 systemd[1]: Starting ignition-quench.service... Oct 2 19:37:43.963750 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:37:43.963803 systemd[1]: Finished ignition-quench.service. Oct 2 19:37:43.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.965500 initrd-setup-root-after-ignition[866]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:37:43.968871 kernel: audit: type=1130 audit(1696275463.962:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.968888 kernel: audit: type=1131 audit(1696275463.962:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.966649 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:37:43.971523 kernel: audit: type=1130 audit(1696275463.967:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.968981 systemd[1]: Reached target ignition-complete.target. 
Oct 2 19:37:43.972014 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:37:43.980029 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:37:43.980079 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:37:43.980509 systemd[1]: Reached target initrd-fs.target. Oct 2 19:37:43.980714 systemd[1]: Reached target initrd.target. Oct 2 19:37:43.980935 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:37:43.981565 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:37:43.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.986604 kernel: audit: type=1130 audit(1696275463.979:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.986621 kernel: audit: type=1131 audit(1696275463.979:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.988117 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:37:43.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.988620 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:37:43.991255 kernel: audit: type=1130 audit(1696275463.987:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.994077 systemd[1]: Stopped target network.target. Oct 2 19:37:43.994325 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:37:43.994469 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:37:43.994670 systemd[1]: Stopped target timers.target. Oct 2 19:37:43.994846 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:37:43.994906 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:37:43.997611 kernel: audit: type=1131 audit(1696275463.993:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.995096 systemd[1]: Stopped target initrd.target. Oct 2 19:37:43.997541 systemd[1]: Stopped target basic.target. Oct 2 19:37:43.997716 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:37:43.997897 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:37:43.998075 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:37:43.998306 systemd[1]: Stopped target remote-fs.target. Oct 2 19:37:43.998478 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:37:43.998665 systemd[1]: Stopped target sysinit.target. 
Oct 2 19:37:43.998989 systemd[1]: Stopped target local-fs.target. Oct 2 19:37:43.999173 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:37:43.999354 systemd[1]: Stopped target swap.target. Oct 2 19:37:44.002147 kernel: audit: type=1131 audit(1696275463.998:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.999508 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:37:43.999567 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:37:44.004855 kernel: audit: type=1131 audit(1696275464.001:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.999774 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:37:44.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.002245 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:37:44.002304 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:37:44.002482 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:37:44.002538 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:37:44.004997 systemd[1]: Stopped target paths.target. Oct 2 19:37:44.005127 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:37:44.006279 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:37:44.006433 systemd[1]: Stopped target slices.target. Oct 2 19:37:44.006621 systemd[1]: Stopped target sockets.target. Oct 2 19:37:44.006797 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:37:44.006842 systemd[1]: Closed iscsid.socket. Oct 2 19:37:44.006981 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:37:44.007021 systemd[1]: Closed iscsiuio.socket. Oct 2 19:37:44.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.007180 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:37:44.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.007260 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:37:44.007498 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:37:44.007557 systemd[1]: Stopped ignition-files.service. Oct 2 19:37:44.008258 systemd[1]: Stopping ignition-mount.service... Oct 2 19:37:44.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:37:44.009304 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:37:44.009707 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:37:44.009916 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:37:44.010007 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:37:44.010070 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:37:44.011512 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:37:44.011590 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:37:44.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.016020 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:37:44.016696 ignition[879]: INFO : Ignition 2.14.0 Oct 2 19:37:44.016696 ignition[879]: INFO : Stage: umount Oct 2 19:37:44.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.016113 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:37:44.018423 ignition[879]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:37:44.018423 ignition[879]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Oct 2 19:37:44.016832 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:37:44.016897 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:37:44.017509 systemd[1]: Stopping network-cleanup.service... Oct 2 19:37:44.017617 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:37:44.017651 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:37:44.017814 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Oct 2 19:37:44.017835 systemd[1]: Stopped afterburn-network-kargs.service. Oct 2 19:37:44.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.019000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:37:44.019759 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:37:44.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.020517 ignition[879]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 2 19:37:44.019784 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:37:44.020510 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:37:44.020531 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:37:44.021961 ignition[879]: INFO : umount: umount passed Oct 2 19:37:44.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:37:44.023008 ignition[879]: INFO : Ignition finished successfully Oct 2 19:37:44.023805 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:37:44.024124 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:37:44.024185 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:37:44.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.024722 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:37:44.024773 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:37:44.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.025042 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:37:44.025087 systemd[1]: Stopped ignition-mount.service. Oct 2 19:37:44.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.026000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:37:44.026165 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:37:44.026190 systemd[1]: Stopped ignition-disks.service. Oct 2 19:37:44.026324 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:37:44.026344 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:37:44.026455 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:37:44.026475 systemd[1]: Stopped ignition-setup.service. Oct 2 19:37:44.028810 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:37:44.029147 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:37:44.029197 systemd[1]: Stopped network-cleanup.service. Oct 2 19:37:44.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.031565 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:37:44.031629 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:37:44.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.031976 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Oct 2 19:37:44.031999 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:37:44.032215 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:37:44.032238 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:37:44.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.032369 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:37:44.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.032389 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:37:44.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.032557 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:37:44.032576 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:37:44.032721 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:37:44.032740 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:37:44.033372 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:37:44.033558 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:37:44.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.033583 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:37:44.033845 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:37:44.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.033870 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:37:44.037390 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:37:44.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.037418 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:37:44.038127 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 19:37:44.038410 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:37:44.038455 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:37:44.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.050970 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:37:44.216453 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:37:44.216528 systemd[1]: Stopped sysroot-boot.service. 
Oct 2 19:37:44.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.216956 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:37:44.217100 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:37:44.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.217132 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:37:44.217844 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:37:44.226457 systemd[1]: Switching root. Oct 2 19:37:44.243994 systemd-journald[217]: Journal stopped Oct 2 19:37:48.190361 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Oct 2 19:37:48.190381 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:37:48.190389 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:37:48.190395 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:37:48.190400 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:37:48.190407 kernel: SELinux: policy capability open_perms=1 Oct 2 19:37:48.190413 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:37:48.190419 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:37:48.190425 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:37:48.190430 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:37:48.190436 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:37:48.190441 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:37:48.190449 systemd[1]: Successfully loaded SELinux policy in 114.766ms. Oct 2 19:37:48.190456 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.337ms. Oct 2 19:37:48.190464 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:37:48.190471 systemd[1]: Detected virtualization vmware. Oct 2 19:37:48.190478 systemd[1]: Detected architecture x86-64. Oct 2 19:37:48.190484 systemd[1]: Detected first boot. Oct 2 19:37:48.190491 systemd[1]: Initializing machine ID from random generator. Oct 2 19:37:48.190497 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:37:48.190504 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:37:48.190511 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:37:48.190518 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:37:48.190525 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:37:48.190532 systemd[1]: Stopped iscsiuio.service. Oct 2 19:37:48.190538 systemd[1]: iscsid.service: Deactivated successfully. 
Oct 2 19:37:48.190545 systemd[1]: Stopped iscsid.service. Oct 2 19:37:48.190551 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:37:48.190557 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:37:48.190564 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:37:48.190570 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:37:48.190577 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:37:48.190584 systemd[1]: Created slice system-getty.slice. Oct 2 19:37:48.190590 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:37:48.190597 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:37:48.190603 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:37:48.190996 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:37:48.191005 systemd[1]: Created slice user.slice. Oct 2 19:37:48.191011 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:37:48.191018 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:37:48.191027 systemd[1]: Set up automount boot.automount. Oct 2 19:37:48.191035 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:37:48.191042 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:37:48.191049 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:37:48.191056 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:37:48.191063 systemd[1]: Reached target integritysetup.target. Oct 2 19:37:48.191075 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:37:48.191461 systemd[1]: Reached target remote-fs.target. Oct 2 19:37:48.191474 systemd[1]: Reached target slices.target. Oct 2 19:37:48.191481 systemd[1]: Reached target swap.target. Oct 2 19:37:48.191488 systemd[1]: Reached target torcx.target. Oct 2 19:37:48.191495 systemd[1]: Reached target veritysetup.target. Oct 2 19:37:48.191502 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:37:48.191510 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:37:48.191517 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:37:48.191523 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:37:48.191530 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:37:48.191537 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:37:48.191544 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:37:48.191551 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:37:48.191557 systemd[1]: Mounting media.mount... Oct 2 19:37:48.191565 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:37:48.191572 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:37:48.191579 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:37:48.191586 systemd[1]: Mounting tmp.mount... Oct 2 19:37:48.191593 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:37:48.191600 systemd[1]: Starting ignition-delete-config.service... Oct 2 19:37:48.191607 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:37:48.191613 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:37:48.191620 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:37:48.191628 systemd[1]: Starting modprobe@drm.service... Oct 2 19:37:48.191635 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:37:48.191641 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:37:48.191648 systemd[1]: Starting modprobe@loop.service... 
Oct 2 19:37:48.191655 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:37:48.191662 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:37:48.191669 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:37:48.191676 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:37:48.191683 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:37:48.191691 systemd[1]: Stopped systemd-journald.service. Oct 2 19:37:48.191698 systemd[1]: Starting systemd-journald.service... Oct 2 19:37:48.191704 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:37:48.191711 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:37:48.191718 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:37:48.191725 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:37:48.191732 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:37:48.191738 systemd[1]: Stopped verity-setup.service. Oct 2 19:37:48.191745 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:37:48.191753 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:37:48.191760 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:37:48.191766 systemd[1]: Mounted media.mount. Oct 2 19:37:48.191773 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:37:48.191780 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:37:48.191787 systemd[1]: Mounted tmp.mount. Oct 2 19:37:48.191794 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:37:48.191800 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:37:48.191807 kernel: loop: module loaded Oct 2 19:37:48.191815 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:37:48.191822 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:37:48.191829 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:37:48.191836 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:37:48.191842 kernel: fuse: init (API version 7.34) Oct 2 19:37:48.191848 systemd[1]: Finished modprobe@drm.service. Oct 2 19:37:48.191856 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:37:48.191862 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:37:48.191869 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:37:48.191877 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:37:48.191884 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:37:48.191890 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:37:48.191897 systemd[1]: Finished modprobe@loop.service. Oct 2 19:37:48.191904 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:37:48.191910 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:37:48.191917 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:37:48.191924 systemd[1]: Reached target network-pre.target. Oct 2 19:37:48.191931 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:37:48.191941 systemd-journald[1003]: Journal started Oct 2 19:37:48.191973 systemd-journald[1003]: Runtime Journal (/run/log/journal/299259744fe84a0ebbf627c9f9a433d2) is 4.8M, max 38.8M, 34.0M free. Oct 2 19:37:44.668000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:37:48.194696 systemd[1]: Mounting sys-kernel-config.mount... 
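The modprobe@*.service jobs finished above (configfs, dm_mod, drm, efi_pstore, fuse, loop) are all instances of systemd's modprobe@.service template, which simply runs modprobe against the instance name. Roughly, with the exact upstream wording assumed rather than quoted:

    # Sketch of modprobe@.service (abridged, details assumed)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    # %i expands to the instance name, e.g. "fuse" for modprobe@fuse.service
    ExecStart=/sbin/modprobe -abq %i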
Oct 2 19:37:48.194715 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:37:45.494000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:37:45.494000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:37:45.494000 audit: BPF prog-id=10 op=LOAD Oct 2 19:37:45.494000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:37:45.494000 audit: BPF prog-id=11 op=LOAD Oct 2 19:37:45.494000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:37:48.050000 audit: BPF prog-id=12 op=LOAD Oct 2 19:37:48.050000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:37:48.051000 audit: BPF prog-id=13 op=LOAD Oct 2 19:37:48.051000 audit: BPF prog-id=14 op=LOAD Oct 2 19:37:48.051000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:37:48.051000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:37:48.051000 audit: BPF prog-id=15 op=LOAD Oct 2 19:37:48.051000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:37:48.051000 audit: BPF prog-id=16 op=LOAD Oct 2 19:37:48.051000 audit: BPF prog-id=17 op=LOAD Oct 2 19:37:48.051000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:37:48.051000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:37:48.053000 audit: BPF prog-id=18 op=LOAD Oct 2 19:37:48.053000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:37:48.053000 audit: BPF prog-id=19 op=LOAD Oct 2 19:37:48.053000 audit: BPF prog-id=20 op=LOAD Oct 2 19:37:48.053000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:37:48.053000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:37:48.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.056000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:37:48.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:37:48.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.133000 audit: BPF prog-id=21 op=LOAD Oct 2 19:37:48.133000 audit: BPF prog-id=22 op=LOAD Oct 2 19:37:48.133000 audit: BPF prog-id=23 op=LOAD Oct 2 19:37:48.133000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:37:48.133000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:37:48.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:37:48.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.187000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:37:48.187000 audit[1003]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc770e5840 a2=4000 a3=7ffc770e58dc items=0 ppid=1 pid=1003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:48.187000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:37:48.050802 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:37:46.098626 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:37:48.055076 systemd[1]: systemd-journald.service: Deactivated successfully. 
Oct 2 19:37:46.100949 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:46Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:37:46.100962 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:46Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:37:46.100983 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:46Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:37:46.100990 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:46Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:37:46.101011 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:46Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:37:46.101018 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:46Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:37:46.101141 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:46Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:37:48.198968 jq[979]: true Oct 2 19:37:46.101165 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:46Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:37:46.101173 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:46Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:37:46.104423 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:46Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:37:46.104443 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:46Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:37:48.199354 systemd[1]: Starting systemd-hwdb-update.service... 
Oct 2 19:37:46.104455 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:46Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:37:46.104463 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:46Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:37:46.104472 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:46Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:37:46.104480 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:46Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:37:47.823799 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:47Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:37:48.206396 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:37:48.206432 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:37:48.206443 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:37:47.823969 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:47Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:37:47.824040 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:47Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:37:47.824145 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:47Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:37:48.206779 jq[1017]: true Oct 2 19:37:47.824178 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:47Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:37:47.824242 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2023-10-02T19:37:47Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:37:48.224361 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:37:48.224390 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:37:48.224401 systemd[1]: Started systemd-journald.service. 
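The torcx generator lines above resolve the docker image (reference com.coreos.cl) from the vendor profile at /usr/share/torcx/profiles/vendor.json and look for a user selection in /etc/torcx/next-profile, which is absent here. For illustration only, a torcx profile is a small JSON manifest along these lines; the field names follow the torcx profile-manifest format as recalled, not the actual file on this machine (the leading '#' line is an annotation, not part of the JSON):

    # Shape of a torcx profile such as vendor.json (assumed, not read from this host)
    {
      "kind": "profile-manifest-v0",
      "value": {
        "images": [
          { "name": "docker", "reference": "com.coreos.cl" }
        ]
      }
    }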
Oct 2 19:37:48.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.220345 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:37:48.220578 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:37:48.228653 systemd-journald[1003]: Time spent on flushing to /var/log/journal/299259744fe84a0ebbf627c9f9a433d2 is 48.794ms for 2020 entries. Oct 2 19:37:48.228653 systemd-journald[1003]: System Journal (/var/log/journal/299259744fe84a0ebbf627c9f9a433d2) is 8.0M, max 584.8M, 576.8M free. Oct 2 19:37:48.313935 systemd-journald[1003]: Received client request to flush runtime journal. Oct 2 19:37:48.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.223094 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:37:48.231761 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:37:48.232010 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:37:48.242060 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:37:48.314706 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:37:48.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.322473 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:37:48.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.323690 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:37:48.328871 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:37:48.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.330019 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:37:48.337320 udevadm[1044]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 2 19:37:48.379225 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:37:48.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.431861 ignition[1026]: Ignition 2.14.0 Oct 2 19:37:48.432238 ignition[1026]: deleting config from guestinfo properties Oct 2 19:37:48.442138 ignition[1026]: Successfully deleted config Oct 2 19:37:48.442959 systemd[1]: Finished ignition-delete-config.service. 
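The runtime journal reported earlier (4.8M used, capped at 38.8M) and the system journal above (8.0M used, capped at 584.8M) get their limits from journald's default percentage-of-filesystem sizing; the flush request is what moves /run/log/journal into /var/log/journal. If fixed caps were wanted instead, they would be pinned in journald.conf roughly as follows, with values that are purely illustrative:

    # Hypothetical /etc/systemd/journald.conf override (values are examples only)
    [Journal]
    Storage=persistent
    RuntimeMaxUse=40M
    SystemMaxUse=500M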
Oct 2 19:37:48.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.776020 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:37:48.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.775000 audit: BPF prog-id=24 op=LOAD Oct 2 19:37:48.775000 audit: BPF prog-id=25 op=LOAD Oct 2 19:37:48.775000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:37:48.775000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:37:48.777317 systemd[1]: Starting systemd-udevd.service... Oct 2 19:37:48.789006 systemd-udevd[1046]: Using default interface naming scheme 'v252'. Oct 2 19:37:48.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.812000 audit: BPF prog-id=26 op=LOAD Oct 2 19:37:48.812717 systemd[1]: Started systemd-udevd.service. Oct 2 19:37:48.814221 systemd[1]: Starting systemd-networkd.service... Oct 2 19:37:48.829000 audit: BPF prog-id=27 op=LOAD Oct 2 19:37:48.829000 audit: BPF prog-id=28 op=LOAD Oct 2 19:37:48.829000 audit: BPF prog-id=29 op=LOAD Oct 2 19:37:48.831325 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:37:48.844075 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:37:48.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.870283 systemd[1]: Started systemd-userdbd.service. 
Oct 2 19:37:48.880231 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 2 19:37:48.889257 kernel: ACPI: button: Power Button [PWRF] Oct 2 19:37:48.962000 audit[1060]: AVC avc: denied { confidentiality } for pid=1060 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:37:48.965919 kernel: kauditd_printk_skb: 111 callbacks suppressed Oct 2 19:37:48.965957 kernel: audit: type=1400 audit(1696275468.962:154): avc: denied { confidentiality } for pid=1060 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:37:48.962000 audit[1060]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5627454767f0 a1=32194 a2=7f7a8301fbc5 a3=5 items=106 ppid=1046 pid=1060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:48.974248 kernel: audit: type=1300 audit(1696275468.962:154): arch=c000003e syscall=175 success=yes exit=0 a0=5627454767f0 a1=32194 a2=7f7a8301fbc5 a3=5 items=106 ppid=1046 pid=1060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:48.978258 kernel: audit: type=1307 audit(1696275468.962:154): cwd="/" Oct 2 19:37:48.978331 kernel: audit: type=1302 audit(1696275468.962:154): item=0 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.978348 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 Oct 2 19:37:48.962000 audit: CWD cwd="/" Oct 2 19:37:48.962000 audit: PATH item=0 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.982867 kernel: audit: type=1302 audit(1696275468.962:154): item=1 name=(null) inode=25014 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.982922 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Oct 2 19:37:48.962000 audit: PATH item=1 name=(null) inode=25014 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.990411 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! 
Oct 2 19:37:48.990580 kernel: audit: type=1302 audit(1696275468.962:154): item=2 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=2 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=3 name=(null) inode=25015 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.994219 kernel: audit: type=1302 audit(1696275468.962:154): item=3 name=(null) inode=25015 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=4 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.998219 kernel: audit: type=1302 audit(1696275468.962:154): item=4 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=5 name=(null) inode=25016 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:49.002369 kernel: audit: type=1302 audit(1696275468.962:154): item=5 name=(null) inode=25016 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=6 name=(null) inode=25016 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:49.005306 kernel: audit: type=1302 audit(1696275468.962:154): item=6 name=(null) inode=25016 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=7 name=(null) inode=25017 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=8 name=(null) inode=25016 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=9 name=(null) inode=25018 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=10 name=(null) inode=25016 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=11 name=(null) inode=25019 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=12 name=(null) inode=25016 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=13 name=(null) inode=25020 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=14 name=(null) inode=25016 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=15 name=(null) inode=25021 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=16 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=17 name=(null) inode=25022 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=18 name=(null) inode=25022 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=19 name=(null) inode=25023 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=20 name=(null) inode=25022 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=21 name=(null) inode=25024 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=22 name=(null) inode=25022 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=23 name=(null) inode=25025 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=24 name=(null) inode=25022 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=25 name=(null) inode=25026 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=26 name=(null) inode=25022 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=27 name=(null) inode=25027 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=28 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=29 name=(null) inode=25028 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=30 name=(null) inode=25028 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=31 name=(null) inode=25029 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=32 name=(null) inode=25028 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=33 name=(null) inode=25030 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=34 name=(null) inode=25028 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=35 name=(null) inode=25031 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=36 name=(null) inode=25028 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=37 name=(null) inode=25032 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=38 name=(null) inode=25028 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=39 name=(null) inode=25033 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=40 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=41 name=(null) inode=25034 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=42 name=(null) inode=25034 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=43 name=(null) inode=25035 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=44 name=(null) inode=25034 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=45 name=(null) inode=25036 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=46 name=(null) inode=25034 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=47 name=(null) inode=25037 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=48 name=(null) inode=25034 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=49 name=(null) inode=25038 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=50 name=(null) inode=25034 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=51 name=(null) inode=25039 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=52 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=53 name=(null) inode=25040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=54 name=(null) inode=25040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=55 name=(null) inode=25041 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=56 name=(null) inode=25040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=57 name=(null) inode=25042 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=58 name=(null) inode=25040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=59 name=(null) inode=25043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=60 name=(null) inode=25043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=61 name=(null) inode=25044 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=62 name=(null) inode=25043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=63 name=(null) inode=25045 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=64 name=(null) inode=25043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=65 name=(null) inode=25046 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=66 name=(null) inode=25043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=67 name=(null) inode=25047 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=68 name=(null) inode=25043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=69 name=(null) inode=25048 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=70 name=(null) inode=25040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=71 name=(null) inode=25049 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=72 name=(null) inode=25049 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=73 name=(null) inode=25050 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=74 name=(null) inode=25049 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=75 name=(null) inode=25051 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=76 name=(null) inode=25049 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=77 name=(null) inode=25052 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH 
item=78 name=(null) inode=25049 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=79 name=(null) inode=25053 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=80 name=(null) inode=25049 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=81 name=(null) inode=25054 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=82 name=(null) inode=25040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=83 name=(null) inode=25055 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=84 name=(null) inode=25055 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=85 name=(null) inode=25056 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=86 name=(null) inode=25055 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=87 name=(null) inode=25057 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=88 name=(null) inode=25055 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=89 name=(null) inode=25058 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=90 name=(null) inode=25055 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=91 name=(null) inode=25059 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=92 name=(null) inode=25055 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=93 name=(null) inode=25060 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=94 name=(null) inode=25040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=95 name=(null) inode=25061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=96 name=(null) inode=25061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=97 name=(null) inode=25062 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=98 name=(null) inode=25061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=99 name=(null) inode=25063 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=100 name=(null) inode=25061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=101 name=(null) inode=25064 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=102 name=(null) inode=25061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=103 name=(null) inode=25065 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=104 name=(null) inode=25061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PATH item=105 name=(null) inode=25066 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:48.962000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 19:37:49.009717 kernel: Guest personality initialized and is active Oct 2 19:37:49.009754 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 2 19:37:49.009768 kernel: Initialized host personality Oct 2 19:37:49.015717 (udev-worker)[1059]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Oct 2 19:37:49.019252 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Oct 2 19:37:49.027222 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 19:37:49.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:37:49.048785 systemd-networkd[1054]: lo: Link UP Oct 2 19:37:49.048791 systemd-networkd[1054]: lo: Gained carrier Oct 2 19:37:49.049079 systemd-networkd[1054]: Enumeration completed Oct 2 19:37:49.049131 systemd[1]: Started systemd-networkd.service. Oct 2 19:37:49.049655 systemd-networkd[1054]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Oct 2 19:37:49.053307 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 2 19:37:49.053626 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 2 19:37:49.055237 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Oct 2 19:37:49.055393 systemd-networkd[1054]: ens192: Link UP Oct 2 19:37:49.055547 systemd-networkd[1054]: ens192: Gained carrier Oct 2 19:37:49.059246 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1055) Oct 2 19:37:49.068767 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:37:49.074624 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:37:49.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:49.076146 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:37:49.098170 lvm[1079]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:37:49.122054 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:37:49.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:49.122356 systemd[1]: Reached target cryptsetup.target. Oct 2 19:37:49.123963 systemd[1]: Starting lvm2-activation.service... Oct 2 19:37:49.127026 lvm[1080]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:37:49.146049 systemd[1]: Finished lvm2-activation.service. Oct 2 19:37:49.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:49.146315 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:37:49.146457 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:37:49.146481 systemd[1]: Reached target local-fs.target. Oct 2 19:37:49.146611 systemd[1]: Reached target machines.target. Oct 2 19:37:49.148180 systemd[1]: Starting ldconfig.service... Oct 2 19:37:49.149375 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:37:49.149439 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:37:49.151085 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:37:49.152433 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:37:49.155243 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:37:49.155453 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. 
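ens192 is brought up above from /etc/systemd/network/00-vmware.network, whose contents are not shown in the log. A plausible minimal .network file for this setup, assuming a match on the vmxnet3 driver and DHCP addressing (both assumptions, not read from the host), would be:

    # Sketch of a 00-vmware.network-style file (contents assumed)
    [Match]
    Driver=vmxnet3

    [Network]
    DHCP=yes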
Oct 2 19:37:49.155490 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:37:49.156794 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:37:49.166625 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1082 (bootctl) Oct 2 19:37:49.167517 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:37:49.171584 systemd-tmpfiles[1085]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:37:49.176073 systemd-tmpfiles[1085]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:37:49.183468 systemd-tmpfiles[1085]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:37:49.190603 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:37:49.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:49.545446 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:37:49.545904 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:37:49.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:49.652914 systemd-fsck[1091]: fsck.fat 4.2 (2021-01-31) Oct 2 19:37:49.652914 systemd-fsck[1091]: /dev/sda1: 789 files, 115069/258078 clusters Oct 2 19:37:49.654317 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:37:49.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:49.655396 systemd[1]: Mounting boot.mount... Oct 2 19:37:49.827311 systemd[1]: Mounted boot.mount. Oct 2 19:37:49.999125 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:37:49.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:50.085294 systemd-networkd[1054]: ens192: Gained IPv6LL Oct 2 19:37:50.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:50.247595 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:37:50.248901 systemd[1]: Starting audit-rules.service... Oct 2 19:37:50.249999 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:37:50.251033 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:37:50.250000 audit: BPF prog-id=30 op=LOAD Oct 2 19:37:50.251000 audit: BPF prog-id=31 op=LOAD Oct 2 19:37:50.252135 systemd[1]: Starting systemd-resolved.service... Oct 2 19:37:50.253201 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:37:50.255389 systemd[1]: Starting systemd-update-utmp.service... 
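The three "Duplicate line" notices above are benign: more than one tmpfiles.d fragment declares the same path, and systemd-tmpfiles ignores the later duplicate. Schematically, with the mode and the second file name invented for illustration:

    # /usr/lib/tmpfiles.d/legacy.conf (line for /run/lock, as referenced above)
    d /run/lock 0755 root root -
    # hypothetical earlier-parsed fragment declaring the same path
    d /run/lock 0755 root root -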
Oct 2 19:37:50.274000 audit[1099]: SYSTEM_BOOT pid=1099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:37:50.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:50.276246 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:37:50.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:50.298451 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:37:50.298618 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:37:50.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:50.341899 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:37:50.342105 systemd[1]: Reached target time-set.target. Oct 2 19:37:50.376685 systemd-resolved[1097]: Positive Trust Anchors: Oct 2 19:37:50.376702 systemd-resolved[1097]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:37:50.376722 systemd-resolved[1097]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:37:50.491641 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:37:50.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:50.559000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:37:50.559000 audit[1115]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff6db4b040 a2=420 a3=0 items=0 ppid=1094 pid=1115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:50.559000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:37:50.560748 augenrules[1115]: No rules Oct 2 19:37:50.561255 systemd[1]: Finished audit-rules.service. Oct 2 19:38:24.168549 systemd-timesyncd[1098]: Contacted time server 66.220.10.2:123 (0.flatcar.pool.ntp.org). Oct 2 19:38:24.168726 systemd-timesyncd[1098]: Initial clock synchronization to Mon 2023-10-02 19:38:24.168497 UTC. Oct 2 19:38:24.204958 systemd-resolved[1097]: Defaulting to hostname 'linux'. 
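The PROCTITLE field in the audit record above is the process command line, hex-encoded with NUL-separated arguments. A small Python sketch (not part of the log) decodes it to /sbin/auditctl -R /etc/audit/audit.rules; the same decoding applies to the later proctitle ending in 002D44, which is /sbin/auditctl -D.

    # decode_proctitle.py - decode a hex-encoded audit PROCTITLE field
    hexstr = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    argv = bytes.fromhex(hexstr).split(b"\x00")            # arguments are NUL-separated
    print(" ".join(arg.decode() for arg in argv))          # -> /sbin/auditctl -R /etc/audit/audit.rules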
Oct 2 19:38:24.206161 systemd[1]: Started systemd-resolved.service. Oct 2 19:38:24.206328 systemd[1]: Reached target network.target. Oct 2 19:38:24.206425 systemd[1]: Reached target nss-lookup.target. Oct 2 19:38:24.626039 ldconfig[1081]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:38:24.643523 systemd[1]: Finished ldconfig.service. Oct 2 19:38:24.644560 systemd[1]: Starting systemd-update-done.service... Oct 2 19:38:24.649218 systemd[1]: Finished systemd-update-done.service. Oct 2 19:38:24.649385 systemd[1]: Reached target sysinit.target. Oct 2 19:38:24.649520 systemd[1]: Started motdgen.path. Oct 2 19:38:24.649615 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:38:24.649792 systemd[1]: Started logrotate.timer. Oct 2 19:38:24.649927 systemd[1]: Started mdadm.timer. Oct 2 19:38:24.650006 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:38:24.650105 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:38:24.650120 systemd[1]: Reached target paths.target. Oct 2 19:38:24.650201 systemd[1]: Reached target timers.target. Oct 2 19:38:24.650434 systemd[1]: Listening on dbus.socket. Oct 2 19:38:24.651276 systemd[1]: Starting docker.socket... Oct 2 19:38:24.653284 systemd[1]: Listening on sshd.socket. Oct 2 19:38:24.653443 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:38:24.653694 systemd[1]: Listening on docker.socket. Oct 2 19:38:24.653821 systemd[1]: Reached target sockets.target. Oct 2 19:38:24.653911 systemd[1]: Reached target basic.target. Oct 2 19:38:24.654114 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:38:24.654137 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:38:24.654835 systemd[1]: Starting containerd.service... Oct 2 19:38:24.655637 systemd[1]: Starting dbus.service... Oct 2 19:38:24.656722 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:38:24.658807 systemd[1]: Starting extend-filesystems.service... Oct 2 19:38:24.659081 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:38:24.662029 jq[1125]: false Oct 2 19:38:24.660742 systemd[1]: Starting motdgen.service... Oct 2 19:38:24.662327 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:38:24.663614 systemd[1]: Starting prepare-critools.service... Oct 2 19:38:24.666155 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:38:24.667744 systemd[1]: Starting sshd-keygen.service... Oct 2 19:38:24.672483 systemd[1]: Starting systemd-logind.service... Oct 2 19:38:24.672629 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:38:24.672661 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:38:24.673378 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:38:24.673827 systemd[1]: Starting update-engine.service... 
Oct 2 19:38:24.674886 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:38:24.676439 systemd[1]: Starting vmtoolsd.service... Oct 2 19:38:24.677555 jq[1138]: true Oct 2 19:38:24.678946 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:38:24.679084 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:38:24.682093 extend-filesystems[1126]: Found sda Oct 2 19:38:24.682422 extend-filesystems[1126]: Found sda1 Oct 2 19:38:24.682567 extend-filesystems[1126]: Found sda2 Oct 2 19:38:24.685141 extend-filesystems[1126]: Found sda3 Oct 2 19:38:24.685317 extend-filesystems[1126]: Found usr Oct 2 19:38:24.685459 extend-filesystems[1126]: Found sda4 Oct 2 19:38:24.685621 extend-filesystems[1126]: Found sda6 Oct 2 19:38:24.685763 extend-filesystems[1126]: Found sda7 Oct 2 19:38:24.685918 extend-filesystems[1126]: Found sda9 Oct 2 19:38:24.685963 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:38:24.686121 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:38:24.686171 extend-filesystems[1126]: Checking size of /dev/sda9 Oct 2 19:38:24.698777 tar[1141]: ./ Oct 2 19:38:24.698777 tar[1141]: ./macvlan Oct 2 19:38:24.699427 tar[1142]: crictl Oct 2 19:38:24.701689 systemd[1]: Started vmtoolsd.service. Oct 2 19:38:24.703033 jq[1143]: true Oct 2 19:38:24.723458 dbus-daemon[1124]: [system] SELinux support is enabled Oct 2 19:38:24.723715 systemd[1]: Started dbus.service. Oct 2 19:38:24.725091 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:38:24.725107 systemd[1]: Reached target system-config.target. Oct 2 19:38:24.725224 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:38:24.725232 systemd[1]: Reached target user-config.target. Oct 2 19:38:24.726264 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:38:24.726360 systemd[1]: Finished motdgen.service. Oct 2 19:38:24.739795 extend-filesystems[1126]: Old size kept for /dev/sda9 Oct 2 19:38:24.740206 extend-filesystems[1126]: Found sr0 Oct 2 19:38:24.741395 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:38:24.741493 systemd[1]: Finished extend-filesystems.service. Oct 2 19:38:24.771047 kernel: NET: Registered PF_VSOCK protocol family Oct 2 19:38:24.780304 update_engine[1137]: I1002 19:38:24.779287 1137 main.cc:92] Flatcar Update Engine starting Oct 2 19:38:24.783699 systemd[1]: Started update-engine.service. Oct 2 19:38:24.783945 update_engine[1137]: I1002 19:38:24.783741 1137 update_check_scheduler.cc:74] Next update check in 7m56s Oct 2 19:38:24.785971 systemd[1]: Started locksmithd.service. Oct 2 19:38:24.790519 bash[1180]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:38:24.791402 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:38:24.800339 systemd-logind[1134]: Watching system buttons on /dev/input/event1 (Power Button) Oct 2 19:38:24.800593 systemd-logind[1134]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 2 19:38:24.800766 systemd-logind[1134]: New seat seat0. Oct 2 19:38:24.801982 systemd[1]: Started systemd-logind.service. 
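update-engine above schedules the next update check ("Next update check in 7m56s") and locksmithd coordinates reboots after an update is applied. The reboot strategy is normally configured in /etc/flatcar/update.conf; the example below is hypothetical, with placeholder values rather than anything read from this host.

    # /etc/flatcar/update.conf (hypothetical example)
    GROUP=stable
    REBOOT_STRATEGY=reboot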
Oct 2 19:38:24.810462 env[1150]: time="2023-10-02T19:38:24.810414853Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:38:24.826594 tar[1141]: ./static Oct 2 19:38:24.846777 env[1150]: time="2023-10-02T19:38:24.846739044Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:38:24.846872 env[1150]: time="2023-10-02T19:38:24.846848443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:38:24.847798 env[1150]: time="2023-10-02T19:38:24.847771068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:38:24.847798 env[1150]: time="2023-10-02T19:38:24.847792919Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:38:24.847947 env[1150]: time="2023-10-02T19:38:24.847930410Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:38:24.847947 env[1150]: time="2023-10-02T19:38:24.847944200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:38:24.848003 env[1150]: time="2023-10-02T19:38:24.847952354Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:38:24.848003 env[1150]: time="2023-10-02T19:38:24.847958409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:38:24.848047 env[1150]: time="2023-10-02T19:38:24.848002875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:38:24.848163 env[1150]: time="2023-10-02T19:38:24.848149864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:38:24.848235 env[1150]: time="2023-10-02T19:38:24.848220410Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:38:24.848235 env[1150]: time="2023-10-02T19:38:24.848232947Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:38:24.848288 env[1150]: time="2023-10-02T19:38:24.848261955Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:38:24.848288 env[1150]: time="2023-10-02T19:38:24.848270730Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:38:24.852089 env[1150]: time="2023-10-02T19:38:24.852059904Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:38:24.852089 env[1150]: time="2023-10-02T19:38:24.852091179Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Oct 2 19:38:24.852216 env[1150]: time="2023-10-02T19:38:24.852101146Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:38:24.852216 env[1150]: time="2023-10-02T19:38:24.852133026Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:38:24.852216 env[1150]: time="2023-10-02T19:38:24.852144435Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:38:24.852216 env[1150]: time="2023-10-02T19:38:24.852152749Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:38:24.852216 env[1150]: time="2023-10-02T19:38:24.852163014Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:38:24.852301 env[1150]: time="2023-10-02T19:38:24.852178490Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:38:24.852301 env[1150]: time="2023-10-02T19:38:24.852234698Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:38:24.852301 env[1150]: time="2023-10-02T19:38:24.852248218Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:38:24.852301 env[1150]: time="2023-10-02T19:38:24.852261721Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:38:24.852301 env[1150]: time="2023-10-02T19:38:24.852273522Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:38:24.852399 env[1150]: time="2023-10-02T19:38:24.852384308Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:38:24.852475 env[1150]: time="2023-10-02T19:38:24.852443482Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:38:24.852637 env[1150]: time="2023-10-02T19:38:24.852623838Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:38:24.852669 env[1150]: time="2023-10-02T19:38:24.852645147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:38:24.852669 env[1150]: time="2023-10-02T19:38:24.852654552Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:38:24.852703 env[1150]: time="2023-10-02T19:38:24.852691488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:38:24.852703 env[1150]: time="2023-10-02T19:38:24.852700935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:38:24.852738 env[1150]: time="2023-10-02T19:38:24.852708716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:38:24.852738 env[1150]: time="2023-10-02T19:38:24.852715772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:38:24.852738 env[1150]: time="2023-10-02T19:38:24.852722515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Oct 2 19:38:24.852738 env[1150]: time="2023-10-02T19:38:24.852729637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:38:24.852804 env[1150]: time="2023-10-02T19:38:24.852736230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:38:24.852804 env[1150]: time="2023-10-02T19:38:24.852781724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:38:24.852804 env[1150]: time="2023-10-02T19:38:24.852791168Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:38:24.852877 env[1150]: time="2023-10-02T19:38:24.852864531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:38:24.852906 env[1150]: time="2023-10-02T19:38:24.852879068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:38:24.852906 env[1150]: time="2023-10-02T19:38:24.852889423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:38:24.852906 env[1150]: time="2023-10-02T19:38:24.852896582Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:38:24.852957 env[1150]: time="2023-10-02T19:38:24.852906681Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:38:24.852957 env[1150]: time="2023-10-02T19:38:24.852913739Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:38:24.852957 env[1150]: time="2023-10-02T19:38:24.852925379Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:38:24.852957 env[1150]: time="2023-10-02T19:38:24.852950135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 2 19:38:24.853145 env[1150]: time="2023-10-02T19:38:24.853110784Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:38:24.856567 env[1150]: time="2023-10-02T19:38:24.853148786Z" level=info msg="Connect containerd service" Oct 2 19:38:24.856567 env[1150]: time="2023-10-02T19:38:24.853171363Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:38:24.856567 env[1150]: time="2023-10-02T19:38:24.853584168Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:38:24.856567 env[1150]: time="2023-10-02T19:38:24.853751857Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:38:24.856567 env[1150]: time="2023-10-02T19:38:24.853784914Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 2 19:38:24.856567 env[1150]: time="2023-10-02T19:38:24.854686174Z" level=info msg="Start subscribing containerd event" Oct 2 19:38:24.856567 env[1150]: time="2023-10-02T19:38:24.854715863Z" level=info msg="Start recovering state" Oct 2 19:38:24.856567 env[1150]: time="2023-10-02T19:38:24.854775783Z" level=info msg="Start event monitor" Oct 2 19:38:24.856567 env[1150]: time="2023-10-02T19:38:24.854788165Z" level=info msg="Start snapshots syncer" Oct 2 19:38:24.856567 env[1150]: time="2023-10-02T19:38:24.854794771Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:38:24.856567 env[1150]: time="2023-10-02T19:38:24.854799117Z" level=info msg="Start streaming server" Oct 2 19:38:24.853883 systemd[1]: Started containerd.service. Oct 2 19:38:24.861031 env[1150]: time="2023-10-02T19:38:24.860069014Z" level=info msg="containerd successfully booted in 0.061528s" Oct 2 19:38:24.875203 tar[1141]: ./vlan Oct 2 19:38:24.925342 tar[1141]: ./portmap Oct 2 19:38:24.979617 tar[1141]: ./host-local Oct 2 19:38:25.033679 tar[1141]: ./vrf Oct 2 19:38:25.078679 tar[1141]: ./bridge Oct 2 19:38:25.085747 systemd[1]: Finished prepare-critools.service. Oct 2 19:38:25.114313 tar[1141]: ./tuning Oct 2 19:38:25.137644 tar[1141]: ./firewall Oct 2 19:38:25.166909 tar[1141]: ./host-device Oct 2 19:38:25.190828 tar[1141]: ./sbr Oct 2 19:38:25.213992 tar[1141]: ./loopback Oct 2 19:38:25.234114 tar[1141]: ./dhcp Oct 2 19:38:25.291214 tar[1141]: ./ptp Oct 2 19:38:25.316116 tar[1141]: ./ipvlan Oct 2 19:38:25.340399 tar[1141]: ./bandwidth Oct 2 19:38:25.369497 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:38:25.446263 locksmithd[1186]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:38:26.179466 sshd_keygen[1156]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:38:26.191973 systemd[1]: Finished sshd-keygen.service. Oct 2 19:38:26.193139 systemd[1]: Starting issuegen.service... Oct 2 19:38:26.196505 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:38:26.196605 systemd[1]: Finished issuegen.service. Oct 2 19:38:26.197709 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:38:26.201557 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:38:26.202484 systemd[1]: Started getty@tty1.service. Oct 2 19:38:26.203307 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 19:38:26.203498 systemd[1]: Reached target getty.target. Oct 2 19:38:26.203633 systemd[1]: Reached target multi-user.target. Oct 2 19:38:26.204588 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:38:26.209318 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:38:26.209423 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:38:26.209601 systemd[1]: Startup finished in 902ms (kernel) + 6.877s (initrd) + 8.180s (userspace) = 15.960s. Oct 2 19:38:26.228737 login[1256]: pam_lastlog(login:session): file /var/log/lastlog is locked/read Oct 2 19:38:26.229156 login[1255]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 2 19:38:26.236281 systemd[1]: Created slice user-500.slice. Oct 2 19:38:26.237159 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:38:26.239876 systemd-logind[1134]: New session 2 of user core. Oct 2 19:38:26.242824 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:38:26.243868 systemd[1]: Starting user@500.service... 
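The long "Start cri plugin with config" record above is containerd printing its effective CRI configuration: overlayfs snapshotter, runc with SystemdCgroup:true, sandbox image registry.k8s.io/pause:3.6, and CNI under /opt/cni/bin and /etc/cni/net.d. Expressed as a containerd 1.6 config.toml, those settings look roughly like the sketch below; this is not a copy of the host's file. The "failed to load cni during init" error is expected until a network config exists under /etc/cni/net.d; the prepare-cni-plugins tar extraction above only installs the plugin binaries.

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir  = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true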
Oct 2 19:38:26.246081 (systemd)[1259]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:38:26.290614 systemd[1259]: Queued start job for default target default.target. Oct 2 19:38:26.290952 systemd[1259]: Reached target paths.target. Oct 2 19:38:26.290966 systemd[1259]: Reached target sockets.target. Oct 2 19:38:26.290974 systemd[1259]: Reached target timers.target. Oct 2 19:38:26.290981 systemd[1259]: Reached target basic.target. Oct 2 19:38:26.291003 systemd[1259]: Reached target default.target. Oct 2 19:38:26.291033 systemd[1259]: Startup finished in 41ms. Oct 2 19:38:26.291056 systemd[1]: Started user@500.service. Oct 2 19:38:26.291827 systemd[1]: Started session-2.scope. Oct 2 19:38:27.230485 login[1256]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 2 19:38:27.232870 systemd-logind[1134]: New session 1 of user core. Oct 2 19:38:27.233568 systemd[1]: Started session-1.scope. Oct 2 19:39:04.838072 systemd[1]: Created slice system-sshd.slice. Oct 2 19:39:04.838892 systemd[1]: Started sshd@0-139.178.70.109:22-86.109.11.97:55334.service. Oct 2 19:39:04.890667 sshd[1280]: Accepted publickey for core from 86.109.11.97 port 55334 ssh2: RSA SHA256:uyYvwjSi6dUkOr9tTVVEmRqFXXvzpDpEUaVQqzpLg1k Oct 2 19:39:04.891541 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:04.895447 systemd[1]: Started session-3.scope. Oct 2 19:39:04.896057 systemd-logind[1134]: New session 3 of user core. Oct 2 19:39:04.945430 systemd[1]: Started sshd@1-139.178.70.109:22-86.109.11.97:55350.service. Oct 2 19:39:04.983208 sshd[1285]: Accepted publickey for core from 86.109.11.97 port 55350 ssh2: RSA SHA256:uyYvwjSi6dUkOr9tTVVEmRqFXXvzpDpEUaVQqzpLg1k Oct 2 19:39:04.983953 sshd[1285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:04.986819 systemd[1]: Started session-4.scope. Oct 2 19:39:04.987044 systemd-logind[1134]: New session 4 of user core. Oct 2 19:39:05.036679 sshd[1285]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:05.039357 systemd[1]: Started sshd@2-139.178.70.109:22-86.109.11.97:55356.service. Oct 2 19:39:05.040723 systemd-logind[1134]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:39:05.040917 systemd[1]: sshd@1-139.178.70.109:22-86.109.11.97:55350.service: Deactivated successfully. Oct 2 19:39:05.041513 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:39:05.042308 systemd-logind[1134]: Removed session 4. Oct 2 19:39:05.079848 sshd[1290]: Accepted publickey for core from 86.109.11.97 port 55356 ssh2: RSA SHA256:uyYvwjSi6dUkOr9tTVVEmRqFXXvzpDpEUaVQqzpLg1k Oct 2 19:39:05.080802 sshd[1290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:05.083607 systemd[1]: Started session-5.scope. Oct 2 19:39:05.083974 systemd-logind[1134]: New session 5 of user core. Oct 2 19:39:05.131311 sshd[1290]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:05.133560 systemd[1]: Started sshd@3-139.178.70.109:22-86.109.11.97:55360.service. Oct 2 19:39:05.134115 systemd[1]: sshd@2-139.178.70.109:22-86.109.11.97:55356.service: Deactivated successfully. Oct 2 19:39:05.134511 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:39:05.136227 systemd-logind[1134]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:39:05.136811 systemd-logind[1134]: Removed session 5. 
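The sshd@<local-address:port>-<peer:port>.service instance names above are the pattern systemd uses for per-connection socket activation (Accept=yes): each accepted TCP connection spawns its own templated sshd@.service instance. A socket unit of that shape looks like the sketch below; it is illustrative and not necessarily identical to Flatcar's sshd.socket.

    [Unit]
    Description=OpenSSH server socket

    [Socket]
    ListenStream=22
    Accept=yes

    [Install]
    WantedBy=sockets.target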
Oct 2 19:39:05.171493 sshd[1296]: Accepted publickey for core from 86.109.11.97 port 55360 ssh2: RSA SHA256:uyYvwjSi6dUkOr9tTVVEmRqFXXvzpDpEUaVQqzpLg1k Oct 2 19:39:05.172444 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:05.175473 systemd[1]: Started session-6.scope. Oct 2 19:39:05.176063 systemd-logind[1134]: New session 6 of user core. Oct 2 19:39:05.225961 sshd[1296]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:05.227998 systemd[1]: Started sshd@4-139.178.70.109:22-86.109.11.97:55372.service. Oct 2 19:39:05.228250 systemd[1]: sshd@3-139.178.70.109:22-86.109.11.97:55360.service: Deactivated successfully. Oct 2 19:39:05.228633 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:39:05.228969 systemd-logind[1134]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:39:05.229578 systemd-logind[1134]: Removed session 6. Oct 2 19:39:05.266695 sshd[1302]: Accepted publickey for core from 86.109.11.97 port 55372 ssh2: RSA SHA256:uyYvwjSi6dUkOr9tTVVEmRqFXXvzpDpEUaVQqzpLg1k Oct 2 19:39:05.267455 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:05.269923 systemd-logind[1134]: New session 7 of user core. Oct 2 19:39:05.270402 systemd[1]: Started session-7.scope. Oct 2 19:39:05.330240 sudo[1306]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:39:05.330419 sudo[1306]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:39:05.338441 dbus-daemon[1124]: \xd0\xdd\xf0\u001b\xb8U: received setenforce notice (enforcing=-335699152) Oct 2 19:39:05.338524 sudo[1306]: pam_unix(sudo:session): session closed for user root Oct 2 19:39:05.340510 sshd[1302]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:05.343167 systemd[1]: Started sshd@5-139.178.70.109:22-86.109.11.97:55388.service. Oct 2 19:39:05.343465 systemd[1]: sshd@4-139.178.70.109:22-86.109.11.97:55372.service: Deactivated successfully. Oct 2 19:39:05.343862 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:39:05.344879 systemd-logind[1134]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:39:05.345685 systemd-logind[1134]: Removed session 7. Oct 2 19:39:05.381457 sshd[1309]: Accepted publickey for core from 86.109.11.97 port 55388 ssh2: RSA SHA256:uyYvwjSi6dUkOr9tTVVEmRqFXXvzpDpEUaVQqzpLg1k Oct 2 19:39:05.382411 sshd[1309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:05.385722 systemd[1]: Started session-8.scope. Oct 2 19:39:05.386523 systemd-logind[1134]: New session 8 of user core. Oct 2 19:39:05.436826 sudo[1314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:39:05.436950 sudo[1314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:39:05.438570 sudo[1314]: pam_unix(sudo:session): session closed for user root Oct 2 19:39:05.441283 sudo[1313]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:39:05.441396 sudo[1313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:39:05.447386 systemd[1]: Stopping audit-rules.service... 
Oct 2 19:39:05.451028 kernel: kauditd_printk_skb: 119 callbacks suppressed Oct 2 19:39:05.451086 kernel: audit: type=1305 audit(1696275545.447:172): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:39:05.451104 kernel: audit: type=1300 audit(1696275545.447:172): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc5a54f3d0 a2=420 a3=0 items=0 ppid=1 pid=1317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:05.447000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:39:05.447000 audit[1317]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc5a54f3d0 a2=420 a3=0 items=0 ppid=1 pid=1317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:05.447000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:39:05.454447 auditctl[1317]: No rules Oct 2 19:39:05.455367 kernel: audit: type=1327 audit(1696275545.447:172): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:39:05.455523 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:39:05.455634 systemd[1]: Stopped audit-rules.service. Oct 2 19:39:05.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:05.456769 systemd[1]: Starting audit-rules.service... Oct 2 19:39:05.460049 kernel: audit: type=1131 audit(1696275545.454:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:05.469199 augenrules[1334]: No rules Oct 2 19:39:05.469656 systemd[1]: Finished audit-rules.service. Oct 2 19:39:05.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:05.472842 sudo[1313]: pam_unix(sudo:session): session closed for user root Oct 2 19:39:05.473029 kernel: audit: type=1130 audit(1696275545.468:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:05.472000 audit[1313]: USER_END pid=1313 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:05.478539 kernel: audit: type=1106 audit(1696275545.472:175): pid=1313 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:05.478591 kernel: audit: type=1104 audit(1696275545.472:176): pid=1313 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:05.472000 audit[1313]: CRED_DISP pid=1313 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:05.477992 systemd[1]: sshd@5-139.178.70.109:22-86.109.11.97:55388.service: Deactivated successfully. Oct 2 19:39:05.476502 sshd[1309]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:05.478302 systemd[1]: session-8.scope: Deactivated successfully. Oct 2 19:39:05.482006 kernel: audit: type=1106 audit(1696275545.476:177): pid=1309 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:39:05.476000 audit[1309]: USER_END pid=1309 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:39:05.481147 systemd-logind[1134]: Session 8 logged out. Waiting for processes to exit. Oct 2 19:39:05.481784 systemd[1]: Started sshd@6-139.178.70.109:22-86.109.11.97:55404.service. Oct 2 19:39:05.476000 audit[1309]: CRED_DISP pid=1309 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:39:05.488373 systemd-logind[1134]: Removed session 8. Oct 2 19:39:05.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.70.109:22-86.109.11.97:55388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:05.491480 kernel: audit: type=1104 audit(1696275545.476:178): pid=1309 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:39:05.491506 kernel: audit: type=1131 audit(1696275545.477:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.70.109:22-86.109.11.97:55388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:05.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.109:22-86.109.11.97:55404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:05.519567 sshd[1340]: Accepted publickey for core from 86.109.11.97 port 55404 ssh2: RSA SHA256:uyYvwjSi6dUkOr9tTVVEmRqFXXvzpDpEUaVQqzpLg1k Oct 2 19:39:05.518000 audit[1340]: USER_ACCT pid=1340 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:39:05.519000 audit[1340]: CRED_ACQ pid=1340 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:39:05.519000 audit[1340]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec4d75350 a2=3 a3=0 items=0 ppid=1 pid=1340 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:05.519000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:39:05.520858 sshd[1340]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:05.523431 systemd-logind[1134]: New session 9 of user core. Oct 2 19:39:05.523948 systemd[1]: Started session-9.scope. Oct 2 19:39:05.525000 audit[1340]: USER_START pid=1340 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:39:05.526000 audit[1342]: CRED_ACQ pid=1342 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:39:05.571000 audit[1343]: USER_ACCT pid=1343 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:05.571000 audit[1343]: CRED_REFR pid=1343 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:05.572837 sudo[1343]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:39:05.572953 sudo[1343]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:39:05.572000 audit[1343]: USER_START pid=1343 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:06.066138 systemd[1]: Reloading. 
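The audit-rules activity recorded above (sudo removing /etc/audit/rules.d/80-selinux.rules and 99-default.rules, then restarting audit-rules.service) ends with augenrules reporting "No rules": the decoded proctitle (…002D44) is /sbin/auditctl -D, which flushes the loaded rules, after which augenrules finds no remaining rules.d fragments to load. For reference, rules.d fragments use ordinary auditctl syntax, one rule per line; the example below is generic and is not the content of the deleted files.

    # /etc/audit/rules.d/example.rules (generic illustration)
    -D
    -b 8192
    -w /etc/ssh/sshd_config -p wa -k sshd_config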
Oct 2 19:39:06.113513 /usr/lib/systemd/system-generators/torcx-generator[1372]: time="2023-10-02T19:39:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:39:06.113530 /usr/lib/systemd/system-generators/torcx-generator[1372]: time="2023-10-02T19:39:06Z" level=info msg="torcx already run" Oct 2 19:39:06.164740 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:39:06.164851 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:39:06.177696 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:39:06.211000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.211000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.211000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.211000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.211000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.211000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.211000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.211000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.211000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.211000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.211000 audit: BPF prog-id=37 op=LOAD Oct 2 19:39:06.212000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:39:06.212000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.212000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.212000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.212000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.212000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.212000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.212000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.212000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.212000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.212000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.212000 audit: BPF prog-id=38 op=LOAD Oct 2 19:39:06.212000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:39:06.212000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.212000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.212000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.212000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.212000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.212000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.212000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.212000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:39:06.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.213000 audit: BPF prog-id=39 op=LOAD Oct 2 19:39:06.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.213000 audit: BPF prog-id=40 op=LOAD Oct 2 19:39:06.213000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:39:06.213000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:39:06.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:39:06.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.214000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.214000 audit: BPF prog-id=41 op=LOAD Oct 2 19:39:06.214000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:39:06.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.216000 audit: BPF prog-id=42 op=LOAD Oct 2 19:39:06.216000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:39:06.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.216000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.216000 audit: BPF prog-id=43 op=LOAD Oct 2 19:39:06.217000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.217000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.217000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.217000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.217000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.217000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.217000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.217000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.217000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.217000 audit: BPF prog-id=44 op=LOAD Oct 2 19:39:06.217000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:39:06.217000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:39:06.217000 audit[1]: AVC avc: denied { bpf } for 
pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.217000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.217000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.217000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.217000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.217000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.217000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.217000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.217000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.218000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.218000 audit: BPF prog-id=45 op=LOAD Oct 2 19:39:06.218000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit: BPF prog-id=46 op=LOAD Oct 2 19:39:06.219000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit: BPF prog-id=47 op=LOAD Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit: BPF prog-id=48 op=LOAD Oct 2 19:39:06.219000 audit: BPF prog-id=33 op=UNLOAD Oct 2 19:39:06.219000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit: BPF prog-id=49 op=LOAD Oct 2 19:39:06.219000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit: BPF prog-id=50 op=LOAD Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:39:06.219000 audit: BPF prog-id=51 op=LOAD Oct 2 19:39:06.219000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:39:06.219000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:39:06.230745 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:39:06.234643 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:39:06.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:06.235318 systemd[1]: Reached target network-online.target. Oct 2 19:39:06.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:06.236624 systemd[1]: Started kubelet.service. Oct 2 19:39:06.240952 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 2 19:39:06.244390 systemd[1]: Starting coreos-metadata.service... Oct 2 19:39:06.261824 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 2 19:39:06.261932 systemd[1]: Finished coreos-metadata.service. Oct 2 19:39:06.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:06.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:06.285123 kubelet[1432]: E1002 19:39:06.285087 1432 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Oct 2 19:39:06.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 19:39:06.286599 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:39:06.286669 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:39:06.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:06.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:06.510501 systemd[1]: Stopped kubelet.service. Oct 2 19:39:06.520423 systemd[1]: Reloading. 
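The kubelet restart cycle above comes down to one error: /var/lib/kubelet/config.yaml does not exist yet, so run.go exits with status 1 and systemd stops and relaunches the unit. On a node that has not finished bootstrapping this is expected, since the file is normally written by the provisioning flow rather than shipped in the image. As an illustration only (nothing below appears in the log, and real clusters generate this file with cluster-specific values, e.g. via kubeadm or Ignition), a minimal KubeletConfiguration that would satisfy the --config path could be dropped in from a shell like this:

  # Hypothetical bootstrap step, not part of this log: write a minimal
  # KubeletConfiguration so kubelet.service can start instead of crash-looping.
  mkdir -p /var/lib/kubelet
  cat > /var/lib/kubelet/config.yaml <<'EOF'
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd
  staticPodPath: /etc/kubernetes/manifests
  EOF

The cgroup driver and static pod path shown here match the values the kubelet reports further down once it does start.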
Oct 2 19:39:06.558449 /usr/lib/systemd/system-generators/torcx-generator[1502]: time="2023-10-02T19:39:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:39:06.558637 /usr/lib/systemd/system-generators/torcx-generator[1502]: time="2023-10-02T19:39:06Z" level=info msg="torcx already run" Oct 2 19:39:06.614288 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:39:06.614305 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:39:06.627379 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:39:06.663000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.663000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.663000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.663000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.663000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.663000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.663000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.663000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.663000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit: BPF prog-id=52 op=LOAD Oct 2 19:39:06.664000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit: BPF prog-id=53 op=LOAD Oct 2 19:39:06.664000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit: BPF prog-id=54 op=LOAD Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit: BPF prog-id=55 op=LOAD Oct 2 19:39:06.664000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:39:06.664000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:39:06.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.664000 audit: BPF prog-id=56 op=LOAD Oct 2 19:39:06.664000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit: BPF prog-id=57 op=LOAD Oct 2 19:39:06.666000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit: BPF prog-id=58 op=LOAD Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.666000 audit: BPF prog-id=59 op=LOAD Oct 2 19:39:06.666000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:39:06.666000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:39:06.667000 audit[1]: AVC avc: denied { bpf } for 
pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.667000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.667000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.667000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.667000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.667000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.667000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.667000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.667000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.667000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.667000 audit: BPF prog-id=60 op=LOAD Oct 2 19:39:06.667000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit: BPF prog-id=61 op=LOAD Oct 2 19:39:06.668000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit: BPF prog-id=62 op=LOAD Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit: BPF prog-id=63 op=LOAD Oct 2 19:39:06.668000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:39:06.668000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.669000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.669000 audit: BPF prog-id=64 op=LOAD Oct 2 19:39:06.669000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:39:06.669000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.669000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.669000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.669000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.669000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.669000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.669000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.669000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.669000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.669000 audit: BPF prog-id=65 op=LOAD Oct 2 19:39:06.669000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.669000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.669000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.669000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.669000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.669000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.669000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.669000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.669000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:39:06.669000 audit: BPF prog-id=66 op=LOAD Oct 2 19:39:06.669000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:39:06.669000 audit: BPF prog-id=51 op=UNLOAD Oct 2 19:39:06.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:06.679267 systemd[1]: Started kubelet.service. Oct 2 19:39:06.716939 kubelet[1562]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:39:06.717179 kubelet[1562]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:39:06.717224 kubelet[1562]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:39:06.717309 kubelet[1562]: I1002 19:39:06.717286 1562 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:39:06.718173 kubelet[1562]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:39:06.718218 kubelet[1562]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:39:06.718255 kubelet[1562]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:39:06.908219 kubelet[1562]: I1002 19:39:06.908202 1562 server.go:413] "Kubelet version" kubeletVersion="v1.25.10" Oct 2 19:39:06.908329 kubelet[1562]: I1002 19:39:06.908320 1562 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:39:06.908517 kubelet[1562]: I1002 19:39:06.908509 1562 server.go:825] "Client rotation is on, will bootstrap in background" Oct 2 19:39:06.911019 kubelet[1562]: I1002 19:39:06.910998 1562 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:39:06.911211 kubelet[1562]: I1002 19:39:06.911204 1562 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:39:06.911296 kubelet[1562]: I1002 19:39:06.911288 1562 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 19:39:06.911397 kubelet[1562]: I1002 19:39:06.911390 1562 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:39:06.911444 kubelet[1562]: I1002 19:39:06.911436 1562 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 19:39:06.911511 kubelet[1562]: I1002 19:39:06.911473 1562 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:39:06.911620 kubelet[1562]: I1002 19:39:06.911613 1562 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:39:06.913099 kubelet[1562]: I1002 19:39:06.913090 1562 kubelet.go:381] "Attempting to sync node with API server" Oct 2 19:39:06.913156 kubelet[1562]: I1002 19:39:06.913149 1562 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:39:06.913207 kubelet[1562]: I1002 19:39:06.913200 1562 kubelet.go:281] "Adding apiserver pod source" Oct 2 19:39:06.913250 kubelet[1562]: I1002 19:39:06.913243 1562 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:39:06.913636 kubelet[1562]: E1002 19:39:06.913628 1562 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:06.913730 kubelet[1562]: E1002 19:39:06.913722 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:06.914596 kubelet[1562]: I1002 19:39:06.914587 1562 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:39:06.914803 kubelet[1562]: W1002 19:39:06.914793 1562 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 19:39:06.915041 kubelet[1562]: I1002 19:39:06.915033 1562 server.go:1175] "Started kubelet" Oct 2 19:39:06.930576 kubelet[1562]: E1002 19:39:06.930486 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f670d1fe0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 915020768, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 915020768, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:06.931054 kubelet[1562]: E1002 19:39:06.931042 1562 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:39:06.931140 kubelet[1562]: E1002 19:39:06.931132 1562 kubelet.go:1317] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:39:06.931390 kubelet[1562]: W1002 19:39:06.931380 1562 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:06.931477 kubelet[1562]: E1002 19:39:06.931469 1562 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:06.931562 kubelet[1562]: W1002 19:39:06.931555 1562 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.67.124.141" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:06.931626 kubelet[1562]: E1002 19:39:06.931619 1562 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.124.141" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:06.931698 kubelet[1562]: I1002 19:39:06.931064 1562 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:39:06.931000 audit[1562]: AVC avc: denied { mac_admin } for pid=1562 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.931000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:39:06.931000 audit[1562]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ca2de0 a1=c0005bb560 a2=c000ca2db0 a3=25 items=0 ppid=1 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:06.931000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:39:06.931000 audit[1562]: AVC avc: denied { mac_admin } for pid=1562 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.931000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:39:06.931000 audit[1562]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008efdc0 a1=c0005bb578 a2=c000ca2e70 a3=25 items=0 ppid=1 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:06.931000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:39:06.932570 kubelet[1562]: I1002 19:39:06.932277 1562 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" 
path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:39:06.932570 kubelet[1562]: I1002 19:39:06.932306 1562 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:39:06.932570 kubelet[1562]: I1002 19:39:06.932345 1562 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:39:06.932882 kubelet[1562]: I1002 19:39:06.932873 1562 server.go:438] "Adding debug handlers to kubelet server" Oct 2 19:39:06.933469 kubelet[1562]: I1002 19:39:06.933455 1562 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:39:06.934177 kubelet[1562]: I1002 19:39:06.934160 1562 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 19:39:06.935758 kubelet[1562]: E1002 19:39:06.935742 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:06.940721 kubelet[1562]: E1002 19:39:06.939540 1562 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.67.124.141" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:39:06.940721 kubelet[1562]: E1002 19:39:06.939566 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f6802d22e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 931122734, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 931122734, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:06.940721 kubelet[1562]: W1002 19:39:06.940625 1562 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:06.940934 kubelet[1562]: E1002 19:39:06.940649 1562 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:06.953139 kubelet[1562]: I1002 19:39:06.953088 1562 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 19:39:06.953139 kubelet[1562]: I1002 19:39:06.953108 1562 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 19:39:06.953139 kubelet[1562]: I1002 19:39:06.953119 1562 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:39:06.953624 kubelet[1562]: E1002 19:39:06.953575 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694b9763", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.141 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952669027, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952669027, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
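The recurring 'User "system:anonymous" cannot ...' denials above mean the kubelet's API requests are arriving without accepted client credentials. One common cause is that the kubeconfig it ended up using (the process was started with --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf and --kubeconfig=/etc/kubernetes/kubelet.conf, per the PROCTITLE records above) does not yet carry a client certificate or token. A rough diagnostic sketch, assuming PyYAML is available and the standard kubeconfig schema:

```python
# Rough diagnostic sketch for the "system:anonymous" denials: report whether the kubeconfig
# the kubelet was pointed at actually carries client credentials. Assumes PyYAML is
# installed; the path matches the --kubeconfig flag visible in the PROCTITLE records.
import yaml

KUBECONFIG = "/etc/kubernetes/kubelet.conf"

with open(KUBECONFIG) as f:
    cfg = yaml.safe_load(f) or {}

for entry in cfg.get("users", []):
    user = entry.get("user", {}) or {}
    has_creds = any(
        key in user
        for key in ("client-certificate", "client-certificate-data", "token", "tokenFile")
    )
    print(f"{entry.get('name')}: client credentials present = {has_creds}")
```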
Oct 2 19:39:06.953992 kubelet[1562]: I1002 19:39:06.953981 1562 policy_none.go:49] "None policy: Start" Oct 2 19:39:06.954438 kubelet[1562]: E1002 19:39:06.954406 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694babb2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.141 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952674226, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952674226, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:06.956302 kubelet[1562]: E1002 19:39:06.956263 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694bb32c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.141 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952676140, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952676140, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:06.956451 kubelet[1562]: I1002 19:39:06.956443 1562 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 19:39:06.956509 kubelet[1562]: I1002 19:39:06.956502 1562 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:39:06.959344 systemd[1]: Created slice kubepods.slice. Oct 2 19:39:06.966301 systemd[1]: Created slice kubepods-besteffort.slice. 
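systemd is creating the kubelet's pod cgroup hierarchy here: kubepods.slice and kubepods-besteffort.slice above, plus kubepods-burstable.slice just below. With CgroupDriver:systemd (as in the NodeConfig dump earlier), these slices correspond to the pod QoS classes; a small illustrative mapping, not taken from the log:

```python
# Illustrative mapping (not from the log) of pod QoS class to the systemd slice the kubelet
# parents pods under when the systemd cgroup driver is in use. Guaranteed pods sit directly
# under kubepods.slice; the burstable and best-effort slices are nested beneath it.
QOS_TO_SLICE = {
    "Guaranteed": "kubepods.slice",
    "Burstable": "kubepods-burstable.slice",
    "BestEffort": "kubepods-besteffort.slice",
}

for qos_class, slice_name in QOS_TO_SLICE.items():
    print(f"{qos_class:<10} -> {slice_name}")
```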
Oct 2 19:39:06.970000 audit[1579]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:06.970000 audit[1579]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc2b705ee0 a2=0 a3=7ffc2b705ecc items=0 ppid=1562 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:06.970000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:39:06.970000 audit[1581]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:06.970000 audit[1581]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffd333c1d20 a2=0 a3=7ffd333c1d0c items=0 ppid=1562 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:06.970000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:39:06.975719 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:39:06.977066 kubelet[1562]: I1002 19:39:06.977053 1562 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:39:06.976000 audit[1562]: AVC avc: denied { mac_admin } for pid=1562 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:06.976000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:39:06.976000 audit[1562]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008ce780 a1=c000879fe0 a2=c0008ce750 a3=25 items=0 ppid=1 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:06.976000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:39:06.977608 kubelet[1562]: I1002 19:39:06.977600 1562 server.go:86] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:39:06.977767 kubelet[1562]: I1002 19:39:06.977760 1562 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:39:06.978494 kubelet[1562]: E1002 19:39:06.978478 1562 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.124.141\" not found" Oct 2 19:39:06.983774 kubelet[1562]: E1002 19:39:06.983716 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f6b17063f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 982778431, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 982778431, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:06.972000 audit[1583]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:06.972000 audit[1583]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe03b5c970 a2=0 a3=7ffe03b5c95c items=0 ppid=1562 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:06.972000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:39:06.985000 audit[1589]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1589 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:06.985000 audit[1589]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff631c1870 a2=0 a3=7fff631c185c items=0 ppid=1562 pid=1589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:06.985000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:39:07.013000 audit[1594]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:07.013000 audit[1594]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcda89d370 a2=0 a3=7ffcda89d35c items=0 ppid=1562 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.013000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:39:07.014000 audit[1595]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1595 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:07.014000 audit[1595]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc130097f0 a2=0 a3=7ffc130097dc items=0 ppid=1562 pid=1595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.014000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:39:07.017000 audit[1598]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:07.017000 audit[1598]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd3a1e4790 a2=0 a3=7ffd3a1e477c items=0 ppid=1562 pid=1598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.017000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:39:07.020000 audit[1601]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:07.020000 audit[1601]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffc79fefc40 a2=0 a3=7ffc79fefc2c items=0 ppid=1562 pid=1601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.020000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:39:07.021000 audit[1602]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1602 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:07.021000 audit[1602]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd92b70af0 a2=0 a3=7ffd92b70adc items=0 ppid=1562 pid=1602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.021000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:39:07.021000 audit[1603]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1603 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:07.021000 audit[1603]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff786b62e0 a2=0 a3=7fff786b62cc items=0 ppid=1562 pid=1603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.021000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:39:07.023000 audit[1605]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1605 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:07.023000 audit[1605]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffebcb72a90 a2=0 a3=7ffebcb72a7c items=0 ppid=1562 pid=1605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.023000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:39:07.034320 kubelet[1562]: E1002 19:39:07.034303 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:07.034841 kubelet[1562]: I1002 19:39:07.034832 1562 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.141" Oct 2 19:39:07.035561 kubelet[1562]: E1002 19:39:07.035552 1562 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the 
cluster scope" node="10.67.124.141" Oct 2 19:39:07.035793 kubelet[1562]: E1002 19:39:07.035755 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694b9763", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.141 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952669027, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 7, 34811952, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.141.178a619f694b9763" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:07.036280 kubelet[1562]: E1002 19:39:07.036254 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694babb2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.141 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952674226, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 7, 34815271, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.141.178a619f694babb2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:07.036741 kubelet[1562]: E1002 19:39:07.036715 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694bb32c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.141 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952676140, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 7, 34817075, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.141.178a619f694bb32c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:07.024000 audit[1607]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1607 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:07.024000 audit[1607]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc13f873c0 a2=0 a3=7ffc13f873ac items=0 ppid=1562 pid=1607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.024000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:39:07.041000 audit[1610]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1610 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:07.041000 audit[1610]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffd490cbdd0 a2=0 a3=7ffd490cbdbc items=0 ppid=1562 pid=1610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.041000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:39:07.043000 audit[1612]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1612 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:07.043000 audit[1612]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffe21ddd8f0 a2=0 a3=7ffe21ddd8dc items=0 ppid=1562 pid=1612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 2 19:39:07.043000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:39:07.048000 audit[1615]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1615 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:07.048000 audit[1615]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7fff45525970 a2=0 a3=7fff4552595c items=0 ppid=1562 pid=1615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.048000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:39:07.049603 kubelet[1562]: I1002 19:39:07.049589 1562 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Oct 2 19:39:07.049000 audit[1616]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1616 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:07.049000 audit[1616]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffac337740 a2=0 a3=7fffac33772c items=0 ppid=1562 pid=1616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.049000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:39:07.049000 audit[1617]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=1617 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:07.049000 audit[1617]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff5cf5e980 a2=0 a3=7fff5cf5e96c items=0 ppid=1562 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.049000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:39:07.049000 audit[1618]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=1618 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:07.049000 audit[1618]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdbac55530 a2=0 a3=7ffdbac5551c items=0 ppid=1562 pid=1618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.049000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:39:07.050000 audit[1619]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=1619 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:07.050000 audit[1619]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc34e2fe00 a2=0 a3=7ffc34e2fdec items=0 ppid=1562 pid=1619 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.050000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:39:07.050000 audit[1621]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1621 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:07.050000 audit[1621]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda4a09fd0 a2=0 a3=7ffda4a09fbc items=0 ppid=1562 pid=1621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.050000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:39:07.050000 audit[1622]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1622 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:07.050000 audit[1622]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffee4f18280 a2=0 a3=7ffee4f1826c items=0 ppid=1562 pid=1622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.050000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:39:07.051000 audit[1623]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1623 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:07.051000 audit[1623]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffc2df00970 a2=0 a3=7ffc2df0095c items=0 ppid=1562 pid=1623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.051000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:39:07.052000 audit[1625]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1625 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:07.052000 audit[1625]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffed3792820 a2=0 a3=7ffed379280c items=0 ppid=1562 pid=1625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.052000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:39:07.053000 audit[1626]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1626 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:07.053000 audit[1626]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd2fcfef70 a2=0 a3=7ffd2fcfef5c items=0 ppid=1562 pid=1626 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.053000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:39:07.053000 audit[1627]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1627 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:07.053000 audit[1627]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0076bdb0 a2=0 a3=7fff0076bd9c items=0 ppid=1562 pid=1627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.053000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:39:07.054000 audit[1629]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=1629 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:07.054000 audit[1629]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc3772cc90 a2=0 a3=7ffc3772cc7c items=0 ppid=1562 pid=1629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.054000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:39:07.056000 audit[1631]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1631 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:07.056000 audit[1631]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc6f548750 a2=0 a3=7ffc6f54873c items=0 ppid=1562 pid=1631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.056000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:39:07.057000 audit[1633]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1633 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:07.057000 audit[1633]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffcd4481d80 a2=0 a3=7ffcd4481d6c items=0 ppid=1562 pid=1633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.057000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:39:07.058000 audit[1635]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1635 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:07.058000 audit[1635]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=220 a0=3 a1=7ffea3307570 a2=0 a3=7ffea330755c items=0 ppid=1562 pid=1635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.058000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:39:07.060000 audit[1637]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1637 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:07.060000 audit[1637]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7fffd2fdf490 a2=0 a3=7fffd2fdf47c items=0 ppid=1562 pid=1637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.060000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:39:07.061621 kubelet[1562]: I1002 19:39:07.061612 1562 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 19:39:07.061675 kubelet[1562]: I1002 19:39:07.061668 1562 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 19:39:07.061727 kubelet[1562]: I1002 19:39:07.061720 1562 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 19:39:07.061787 kubelet[1562]: E1002 19:39:07.061781 1562 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:39:07.061000 audit[1638]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1638 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:07.061000 audit[1638]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda2b58e90 a2=0 a3=7ffda2b58e7c items=0 ppid=1562 pid=1638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.061000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:39:07.061000 audit[1639]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1639 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:07.061000 audit[1639]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc093bde60 a2=0 a3=7ffc093bde4c items=0 ppid=1562 pid=1639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.061000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:39:07.062795 kubelet[1562]: W1002 19:39:07.062786 1562 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 
19:39:07.062845 kubelet[1562]: E1002 19:39:07.062839 1562 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:07.062000 audit[1640]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1640 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:07.062000 audit[1640]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3754fc30 a2=0 a3=7ffc3754fc1c items=0 ppid=1562 pid=1640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:07.062000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:39:07.134653 kubelet[1562]: E1002 19:39:07.134622 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:07.140759 kubelet[1562]: E1002 19:39:07.140742 1562 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.67.124.141" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:39:07.236044 kubelet[1562]: E1002 19:39:07.235135 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:07.236589 kubelet[1562]: I1002 19:39:07.236573 1562 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.141" Oct 2 19:39:07.237828 kubelet[1562]: E1002 19:39:07.237816 1562 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.124.141" Oct 2 19:39:07.237969 kubelet[1562]: E1002 19:39:07.237915 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694b9763", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.141 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952669027, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 7, 236544541, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.141.178a619f694b9763" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:07.238486 kubelet[1562]: E1002 19:39:07.238457 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694babb2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.141 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952674226, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 7, 236548642, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.141.178a619f694babb2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:07.330589 kubelet[1562]: E1002 19:39:07.330535 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694bb32c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.141 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952676140, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 7, 236550240, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.141.178a619f694bb32c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:07.337040 kubelet[1562]: E1002 19:39:07.337027 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:07.437594 kubelet[1562]: E1002 19:39:07.437569 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:07.538131 kubelet[1562]: E1002 19:39:07.538064 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:07.541806 kubelet[1562]: E1002 19:39:07.541791 1562 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.67.124.141" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:39:07.638356 kubelet[1562]: E1002 19:39:07.638331 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:07.638969 kubelet[1562]: I1002 19:39:07.638958 1562 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.141" Oct 2 19:39:07.640082 kubelet[1562]: E1002 19:39:07.640064 1562 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.124.141" Oct 2 19:39:07.640183 kubelet[1562]: E1002 19:39:07.640072 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694b9763", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.141 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952669027, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 7, 638920775, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.141.178a619f694b9763" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
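The lease controller's retries in this log back off geometrically: 200ms, then 400ms, then 800ms above, and 1.6s just below. A tiny sketch of that doubling pattern; the cap value is an assumption for illustration and is not something this log shows.

```python
# The lease controller's retry intervals in this log double on each failure: 200ms, 400ms,
# 800ms, 1.6s. Sketch of that doubling; the 7s cap is an assumption, not visible here.
def backoff_intervals(base_ms: int = 200, factor: int = 2, cap_ms: int = 7000, attempts: int = 6):
    delay = base_ms
    for _ in range(attempts):
        yield min(delay, cap_ms)
        delay *= factor

print([f"{d/1000:g}s" for d in backoff_intervals()])
# ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s', '6.4s']
```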
Oct 2 19:39:07.731349 kubelet[1562]: E1002 19:39:07.731277 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694babb2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.141 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952674226, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 7, 638929809, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.141.178a619f694babb2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:07.738680 kubelet[1562]: E1002 19:39:07.738662 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:07.839281 kubelet[1562]: E1002 19:39:07.839206 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:07.896287 kubelet[1562]: W1002 19:39:07.896258 1562 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.67.124.141" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:07.896287 kubelet[1562]: E1002 19:39:07.896285 1562 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.124.141" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:07.914543 kubelet[1562]: E1002 19:39:07.914526 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:07.930494 kubelet[1562]: E1002 19:39:07.930429 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694bb32c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.141 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 
952676140, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 7, 638932249, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.141.178a619f694bb32c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:07.940222 kubelet[1562]: E1002 19:39:07.940208 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:07.999593 kubelet[1562]: W1002 19:39:07.999520 1562 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:07.999593 kubelet[1562]: E1002 19:39:07.999558 1562 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:08.041278 kubelet[1562]: E1002 19:39:08.041237 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:08.141415 kubelet[1562]: E1002 19:39:08.141374 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:08.181251 kubelet[1562]: W1002 19:39:08.181214 1562 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:08.181251 kubelet[1562]: E1002 19:39:08.181242 1562 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:08.241687 kubelet[1562]: E1002 19:39:08.241616 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:08.278709 kubelet[1562]: W1002 19:39:08.278646 1562 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:08.278709 kubelet[1562]: E1002 19:39:08.278674 1562 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:08.342228 kubelet[1562]: E1002 19:39:08.342161 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:08.342790 kubelet[1562]: E1002 19:39:08.342770 1562 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.67.124.141" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:39:08.441088 kubelet[1562]: I1002 19:39:08.441008 1562 kubelet_node_status.go:70] "Attempting to register node" 
node="10.67.124.141" Oct 2 19:39:08.442215 kubelet[1562]: E1002 19:39:08.442198 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:08.442525 kubelet[1562]: E1002 19:39:08.442514 1562 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.124.141" Oct 2 19:39:08.442588 kubelet[1562]: E1002 19:39:08.442509 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694b9763", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.141 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952669027, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 8, 440983536, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.141.178a619f694b9763" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:08.443288 kubelet[1562]: E1002 19:39:08.443248 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694babb2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.141 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952674226, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 8, 440987185, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.141.178a619f694babb2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:08.531506 kubelet[1562]: E1002 19:39:08.531431 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694bb32c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.141 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952676140, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 8, 440989336, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.141.178a619f694bb32c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:08.542745 kubelet[1562]: E1002 19:39:08.542728 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:08.643210 kubelet[1562]: E1002 19:39:08.643169 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:08.743624 kubelet[1562]: E1002 19:39:08.743545 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:08.844198 kubelet[1562]: E1002 19:39:08.844092 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:08.915670 kubelet[1562]: E1002 19:39:08.915597 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:08.944202 kubelet[1562]: E1002 19:39:08.944165 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:09.044825 kubelet[1562]: E1002 19:39:09.044679 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:09.145059 kubelet[1562]: E1002 19:39:09.145035 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:09.245375 kubelet[1562]: E1002 19:39:09.245347 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:09.345875 kubelet[1562]: E1002 19:39:09.345791 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:09.446573 kubelet[1562]: E1002 19:39:09.446539 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:09.546940 kubelet[1562]: E1002 19:39:09.546914 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:09.647310 kubelet[1562]: E1002 19:39:09.647291 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:09.747695 kubelet[1562]: E1002 19:39:09.747667 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:09.848213 kubelet[1562]: E1002 
19:39:09.848186 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:09.916628 kubelet[1562]: E1002 19:39:09.916550 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:09.944391 kubelet[1562]: E1002 19:39:09.944351 1562 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.67.124.141" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:39:09.948555 kubelet[1562]: E1002 19:39:09.948537 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:10.043560 kubelet[1562]: I1002 19:39:10.043534 1562 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.141" Oct 2 19:39:10.044485 kubelet[1562]: E1002 19:39:10.044465 1562 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.124.141" Oct 2 19:39:10.044646 kubelet[1562]: E1002 19:39:10.044573 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694b9763", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.141 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952669027, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 10, 43501446, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.141.178a619f694b9763" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:10.045282 kubelet[1562]: E1002 19:39:10.045238 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694babb2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.141 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952674226, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 10, 43509587, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.141.178a619f694babb2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:10.045934 kubelet[1562]: E1002 19:39:10.045893 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694bb32c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.141 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952676140, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 10, 43511986, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.141.178a619f694bb32c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:10.049056 kubelet[1562]: E1002 19:39:10.049038 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:10.110198 kubelet[1562]: W1002 19:39:10.110176 1562 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:10.110333 kubelet[1562]: E1002 19:39:10.110315 1562 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:10.149548 kubelet[1562]: E1002 19:39:10.149513 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:10.181607 kubelet[1562]: W1002 19:39:10.181510 1562 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.67.124.141" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:10.181607 kubelet[1562]: E1002 19:39:10.181531 1562 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.124.141" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:10.249849 kubelet[1562]: E1002 19:39:10.249819 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:10.252451 update_engine[1137]: I1002 19:39:10.252063 1137 update_attempter.cc:505] Updating boot flags... 
Oct 2 19:39:10.350927 kubelet[1562]: E1002 19:39:10.350893 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:10.451850 kubelet[1562]: E1002 19:39:10.451764 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:10.487844 kubelet[1562]: W1002 19:39:10.487794 1562 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:10.487844 kubelet[1562]: E1002 19:39:10.487818 1562 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:10.537853 kubelet[1562]: W1002 19:39:10.537831 1562 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:10.537961 kubelet[1562]: E1002 19:39:10.537867 1562 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:10.552047 kubelet[1562]: E1002 19:39:10.552032 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:10.652459 kubelet[1562]: E1002 19:39:10.652427 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:10.752955 kubelet[1562]: E1002 19:39:10.752868 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:10.853386 kubelet[1562]: E1002 19:39:10.853357 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:10.916795 kubelet[1562]: E1002 19:39:10.916758 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:10.954113 kubelet[1562]: E1002 19:39:10.954083 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:11.054935 kubelet[1562]: E1002 19:39:11.054860 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:11.155344 kubelet[1562]: E1002 19:39:11.155311 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:11.255854 kubelet[1562]: E1002 19:39:11.255814 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:11.356407 kubelet[1562]: E1002 19:39:11.356317 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:11.457068 kubelet[1562]: E1002 19:39:11.457038 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:11.557666 kubelet[1562]: E1002 19:39:11.557626 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:11.658247 kubelet[1562]: E1002 19:39:11.658221 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:11.758906 kubelet[1562]: E1002 19:39:11.758874 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:11.859304 
kubelet[1562]: E1002 19:39:11.859277 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:11.917876 kubelet[1562]: E1002 19:39:11.917778 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:11.960065 kubelet[1562]: E1002 19:39:11.960038 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:11.978642 kubelet[1562]: E1002 19:39:11.978619 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:12.060378 kubelet[1562]: E1002 19:39:12.060352 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:12.160658 kubelet[1562]: E1002 19:39:12.160630 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:12.261049 kubelet[1562]: E1002 19:39:12.260970 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:12.361512 kubelet[1562]: E1002 19:39:12.361471 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:12.462067 kubelet[1562]: E1002 19:39:12.462038 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:12.562597 kubelet[1562]: E1002 19:39:12.562508 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:12.663098 kubelet[1562]: E1002 19:39:12.663051 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:12.763482 kubelet[1562]: E1002 19:39:12.763448 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:12.864074 kubelet[1562]: E1002 19:39:12.863983 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:12.918376 kubelet[1562]: E1002 19:39:12.918299 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:12.964551 kubelet[1562]: E1002 19:39:12.964518 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:13.065195 kubelet[1562]: E1002 19:39:13.065163 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:13.145809 kubelet[1562]: E1002 19:39:13.145767 1562 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.67.124.141" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:39:13.166041 kubelet[1562]: E1002 19:39:13.166019 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:13.245788 kubelet[1562]: I1002 19:39:13.245728 1562 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.141" Oct 2 19:39:13.246544 kubelet[1562]: E1002 19:39:13.246530 1562 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.124.141" Oct 2 19:39:13.246727 kubelet[1562]: E1002 19:39:13.246684 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694b9763", GenerateName:"", 
Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.141 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952669027, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 13, 245686307, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.141.178a619f694b9763" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:13.247110 kubelet[1562]: E1002 19:39:13.247079 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694babb2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.141 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952674226, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 13, 245690354, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.141.178a619f694babb2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:13.247476 kubelet[1562]: E1002 19:39:13.247445 1562 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.141.178a619f694bb32c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.141", UID:"10.67.124.141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.141 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.141"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 6, 952676140, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 13, 245692013, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.141.178a619f694bb32c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:13.266831 kubelet[1562]: E1002 19:39:13.266807 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:13.367264 kubelet[1562]: E1002 19:39:13.367235 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:13.467982 kubelet[1562]: E1002 19:39:13.467905 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:13.568490 kubelet[1562]: E1002 19:39:13.568462 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:13.668925 kubelet[1562]: E1002 19:39:13.668894 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:13.769444 kubelet[1562]: E1002 19:39:13.769371 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:13.869886 kubelet[1562]: E1002 19:39:13.869861 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:13.919274 kubelet[1562]: E1002 19:39:13.919245 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:13.969978 kubelet[1562]: E1002 19:39:13.969944 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:14.070697 kubelet[1562]: E1002 19:39:14.070627 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:14.107471 kubelet[1562]: W1002 19:39:14.107443 1562 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:14.107471 kubelet[1562]: E1002 19:39:14.107472 1562 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource 
"runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:14.171020 kubelet[1562]: E1002 19:39:14.170974 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:14.271501 kubelet[1562]: E1002 19:39:14.271467 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:14.371982 kubelet[1562]: E1002 19:39:14.371903 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:14.472593 kubelet[1562]: E1002 19:39:14.472537 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:14.573087 kubelet[1562]: E1002 19:39:14.573046 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:14.600991 kubelet[1562]: W1002 19:39:14.600964 1562 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.67.124.141" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:14.600991 kubelet[1562]: E1002 19:39:14.600985 1562 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.124.141" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:14.673434 kubelet[1562]: E1002 19:39:14.673400 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:14.773887 kubelet[1562]: E1002 19:39:14.773857 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:14.861042 kubelet[1562]: W1002 19:39:14.861021 1562 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:14.861042 kubelet[1562]: E1002 19:39:14.861039 1562 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:14.874297 kubelet[1562]: E1002 19:39:14.874274 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:14.919669 kubelet[1562]: E1002 19:39:14.919632 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:14.974967 kubelet[1562]: E1002 19:39:14.974901 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:15.075130 kubelet[1562]: E1002 19:39:15.075107 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:15.175761 kubelet[1562]: E1002 19:39:15.175732 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:15.276191 kubelet[1562]: E1002 19:39:15.276121 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:15.376591 kubelet[1562]: E1002 19:39:15.376563 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:15.477157 kubelet[1562]: E1002 19:39:15.477131 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:15.577624 kubelet[1562]: E1002 19:39:15.577554 1562 kubelet.go:2448] "Error 
getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:15.678051 kubelet[1562]: E1002 19:39:15.678022 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:15.778484 kubelet[1562]: E1002 19:39:15.778456 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:15.878986 kubelet[1562]: E1002 19:39:15.878920 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:15.920345 kubelet[1562]: E1002 19:39:15.920319 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:15.979449 kubelet[1562]: E1002 19:39:15.979420 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:16.079804 kubelet[1562]: E1002 19:39:16.079779 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:16.180262 kubelet[1562]: E1002 19:39:16.180235 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:16.280683 kubelet[1562]: E1002 19:39:16.280655 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:16.317616 kubelet[1562]: W1002 19:39:16.317589 1562 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:16.317616 kubelet[1562]: E1002 19:39:16.317614 1562 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:16.381144 kubelet[1562]: E1002 19:39:16.381108 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:16.481833 kubelet[1562]: E1002 19:39:16.481745 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:16.582266 kubelet[1562]: E1002 19:39:16.582236 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:16.682727 kubelet[1562]: E1002 19:39:16.682701 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:16.783560 kubelet[1562]: E1002 19:39:16.783488 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:16.883956 kubelet[1562]: E1002 19:39:16.883930 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:16.911222 kubelet[1562]: I1002 19:39:16.911187 1562 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:39:16.921379 kubelet[1562]: E1002 19:39:16.921353 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:16.979702 kubelet[1562]: E1002 19:39:16.979385 1562 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.124.141\" not found" Oct 2 19:39:16.979702 kubelet[1562]: E1002 19:39:16.979646 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:16.984946 kubelet[1562]: E1002 19:39:16.984931 1562 kubelet.go:2448] "Error getting 
node" err="node \"10.67.124.141\" not found" Oct 2 19:39:17.085197 kubelet[1562]: E1002 19:39:17.085119 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:17.185660 kubelet[1562]: E1002 19:39:17.185630 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:17.273447 kubelet[1562]: E1002 19:39:17.273419 1562 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.67.124.141" not found Oct 2 19:39:17.286788 kubelet[1562]: E1002 19:39:17.286757 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:17.387317 kubelet[1562]: E1002 19:39:17.387193 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:17.487869 kubelet[1562]: E1002 19:39:17.487839 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:17.588406 kubelet[1562]: E1002 19:39:17.588374 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:17.688908 kubelet[1562]: E1002 19:39:17.688881 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:17.789983 kubelet[1562]: E1002 19:39:17.789954 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:17.890650 kubelet[1562]: E1002 19:39:17.890574 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:17.921936 kubelet[1562]: E1002 19:39:17.921902 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:17.991193 kubelet[1562]: E1002 19:39:17.991123 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:18.091568 kubelet[1562]: E1002 19:39:18.091537 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:18.192064 kubelet[1562]: E1002 19:39:18.192038 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:18.292146 kubelet[1562]: E1002 19:39:18.292077 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:18.333568 kubelet[1562]: E1002 19:39:18.333547 1562 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.67.124.141" not found Oct 2 19:39:18.392981 kubelet[1562]: E1002 19:39:18.392951 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:18.493849 kubelet[1562]: E1002 19:39:18.493819 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:18.593959 kubelet[1562]: E1002 19:39:18.593891 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:18.694404 kubelet[1562]: E1002 19:39:18.694372 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:18.795458 kubelet[1562]: E1002 19:39:18.795430 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:18.895994 kubelet[1562]: E1002 19:39:18.895968 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:18.922293 kubelet[1562]: E1002 19:39:18.922269 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:39:18.996563 kubelet[1562]: E1002 19:39:18.996535 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:19.097078 kubelet[1562]: E1002 19:39:19.097050 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:19.198117 kubelet[1562]: E1002 19:39:19.198030 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:19.298665 kubelet[1562]: E1002 19:39:19.298632 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:19.398736 kubelet[1562]: E1002 19:39:19.398710 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:19.499433 kubelet[1562]: E1002 19:39:19.499352 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:19.549164 kubelet[1562]: E1002 19:39:19.549131 1562 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.67.124.141\" not found" node="10.67.124.141" Oct 2 19:39:19.600435 kubelet[1562]: E1002 19:39:19.600405 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:19.647303 kubelet[1562]: I1002 19:39:19.647226 1562 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.141" Oct 2 19:39:19.701274 kubelet[1562]: E1002 19:39:19.701214 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:19.735104 kubelet[1562]: I1002 19:39:19.735071 1562 kubelet_node_status.go:73] "Successfully registered node" node="10.67.124.141" Oct 2 19:39:19.802321 kubelet[1562]: E1002 19:39:19.802036 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:19.902602 kubelet[1562]: E1002 19:39:19.902568 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:19.923159 kubelet[1562]: E1002 19:39:19.923133 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:20.003448 kubelet[1562]: E1002 19:39:20.003419 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:20.104247 kubelet[1562]: E1002 19:39:20.103989 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:20.193184 sudo[1343]: pam_unix(sudo:session): session closed for user root Oct 2 19:39:20.198186 kernel: kauditd_printk_skb: 474 callbacks suppressed Oct 2 19:39:20.198235 kernel: audit: type=1106 audit(1696275560.192:577): pid=1343 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:20.198257 kernel: audit: type=1104 audit(1696275560.192:578): pid=1343 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:20.192000 audit[1343]: USER_END pid=1343 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:20.192000 audit[1343]: CRED_DISP pid=1343 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:20.201326 sshd[1340]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:20.201000 audit[1340]: USER_END pid=1340 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:39:20.203482 systemd[1]: sshd@6-139.178.70.109:22-86.109.11.97:55404.service: Deactivated successfully. Oct 2 19:39:20.204135 systemd[1]: session-9.scope: Deactivated successfully. Oct 2 19:39:20.204876 kubelet[1562]: E1002 19:39:20.204864 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:20.207038 kernel: audit: type=1106 audit(1696275560.201:579): pid=1340 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:39:20.207214 kernel: audit: type=1104 audit(1696275560.201:580): pid=1340 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:39:20.201000 audit[1340]: CRED_DISP pid=1340 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=86.109.11.97 addr=86.109.11.97 terminal=ssh res=success' Oct 2 19:39:20.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.109:22-86.109.11.97:55404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:20.213101 kernel: audit: type=1131 audit(1696275560.202:581): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.109:22-86.109.11.97:55404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:20.213212 systemd-logind[1134]: Session 9 logged out. Waiting for processes to exit. Oct 2 19:39:20.213706 systemd-logind[1134]: Removed session 9. 
Oct 2 19:39:20.305443 kubelet[1562]: E1002 19:39:20.305422 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:20.405920 kubelet[1562]: E1002 19:39:20.405892 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:20.506745 kubelet[1562]: E1002 19:39:20.506710 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:20.607218 kubelet[1562]: E1002 19:39:20.607194 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:20.708272 kubelet[1562]: E1002 19:39:20.708157 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:20.808737 kubelet[1562]: E1002 19:39:20.808683 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:20.909292 kubelet[1562]: E1002 19:39:20.909257 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:20.923674 kubelet[1562]: E1002 19:39:20.923650 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:21.010412 kubelet[1562]: E1002 19:39:21.010326 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:21.111024 kubelet[1562]: E1002 19:39:21.110975 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:21.211619 kubelet[1562]: E1002 19:39:21.211579 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:21.312238 kubelet[1562]: E1002 19:39:21.312144 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:21.413025 kubelet[1562]: E1002 19:39:21.412981 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:21.513634 kubelet[1562]: E1002 19:39:21.513596 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:21.614327 kubelet[1562]: E1002 19:39:21.614244 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:21.714830 kubelet[1562]: E1002 19:39:21.714805 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:21.815542 kubelet[1562]: E1002 19:39:21.815516 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:21.916279 kubelet[1562]: E1002 19:39:21.916253 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:21.924412 kubelet[1562]: E1002 19:39:21.924398 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:21.980202 kubelet[1562]: E1002 19:39:21.980179 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:22.017004 kubelet[1562]: E1002 19:39:22.016972 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:22.117455 kubelet[1562]: E1002 19:39:22.117415 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:22.218083 kubelet[1562]: E1002 19:39:22.217973 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:22.318618 kubelet[1562]: E1002 19:39:22.318574 1562 kubelet.go:2448] "Error getting node" err="node 
\"10.67.124.141\" not found" Oct 2 19:39:22.419437 kubelet[1562]: E1002 19:39:22.419399 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:22.520232 kubelet[1562]: E1002 19:39:22.520139 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:22.620913 kubelet[1562]: E1002 19:39:22.620887 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:22.721551 kubelet[1562]: E1002 19:39:22.721526 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:22.822102 kubelet[1562]: E1002 19:39:22.822035 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:22.922765 kubelet[1562]: E1002 19:39:22.922738 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:22.924914 kubelet[1562]: E1002 19:39:22.924889 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:23.023558 kubelet[1562]: E1002 19:39:23.023530 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:23.124103 kubelet[1562]: E1002 19:39:23.124005 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:23.224546 kubelet[1562]: E1002 19:39:23.224521 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:23.325062 kubelet[1562]: E1002 19:39:23.325038 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:23.425682 kubelet[1562]: E1002 19:39:23.425659 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:23.526225 kubelet[1562]: E1002 19:39:23.526194 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:23.626325 kubelet[1562]: E1002 19:39:23.626274 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:23.727119 kubelet[1562]: E1002 19:39:23.727038 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:23.827497 kubelet[1562]: E1002 19:39:23.827468 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:23.925151 kubelet[1562]: E1002 19:39:23.925124 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:23.928360 kubelet[1562]: E1002 19:39:23.928340 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:24.029168 kubelet[1562]: E1002 19:39:24.029092 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:24.129641 kubelet[1562]: E1002 19:39:24.129610 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:24.230183 kubelet[1562]: E1002 19:39:24.230151 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:24.331130 kubelet[1562]: E1002 19:39:24.331049 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:24.431876 kubelet[1562]: E1002 19:39:24.431847 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:24.532434 kubelet[1562]: E1002 19:39:24.532408 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:24.633051 kubelet[1562]: E1002 19:39:24.632966 1562 
kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:24.733494 kubelet[1562]: E1002 19:39:24.733467 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:24.833925 kubelet[1562]: E1002 19:39:24.833899 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:24.926284 kubelet[1562]: E1002 19:39:24.926261 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:24.934618 kubelet[1562]: E1002 19:39:24.934603 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:25.035139 kubelet[1562]: E1002 19:39:25.035109 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:25.135361 kubelet[1562]: E1002 19:39:25.135333 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:25.235908 kubelet[1562]: E1002 19:39:25.235830 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:25.336369 kubelet[1562]: E1002 19:39:25.336346 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:25.437033 kubelet[1562]: E1002 19:39:25.436990 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:25.537576 kubelet[1562]: E1002 19:39:25.537502 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:25.638071 kubelet[1562]: E1002 19:39:25.638039 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:25.738539 kubelet[1562]: E1002 19:39:25.738516 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:25.839098 kubelet[1562]: E1002 19:39:25.839020 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:25.926482 kubelet[1562]: E1002 19:39:25.926456 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:25.939839 kubelet[1562]: E1002 19:39:25.939815 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:26.040956 kubelet[1562]: E1002 19:39:26.040926 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:26.141397 kubelet[1562]: E1002 19:39:26.141377 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:26.241913 kubelet[1562]: E1002 19:39:26.241885 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:26.342474 kubelet[1562]: E1002 19:39:26.342444 1562 kubelet.go:2448] "Error getting node" err="node \"10.67.124.141\" not found" Oct 2 19:39:26.443060 kubelet[1562]: I1002 19:39:26.442855 1562 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:39:26.443370 env[1150]: time="2023-10-02T19:39:26.443342651Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 2 19:39:26.443557 kubelet[1562]: I1002 19:39:26.443471 1562 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:39:26.443738 kubelet[1562]: E1002 19:39:26.443727 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:26.913558 kubelet[1562]: E1002 19:39:26.913532 1562 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:26.925684 kubelet[1562]: I1002 19:39:26.925669 1562 apiserver.go:52] "Watching apiserver" Oct 2 19:39:26.926798 kubelet[1562]: E1002 19:39:26.926788 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:26.927488 kubelet[1562]: I1002 19:39:26.927474 1562 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:39:26.927572 kubelet[1562]: I1002 19:39:26.927559 1562 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:39:26.930987 systemd[1]: Created slice kubepods-besteffort-podbf1a6a4b_41f1_42c4_9b9e_7ab8b65c7094.slice. Oct 2 19:39:26.937158 systemd[1]: Created slice kubepods-burstable-podcd7aec30_deb4_4fea_8195_316940ef2335.slice. Oct 2 19:39:26.940224 kubelet[1562]: I1002 19:39:26.940207 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-bpf-maps\") pod \"cilium-cgbrl\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " pod="kube-system/cilium-cgbrl" Oct 2 19:39:26.940333 kubelet[1562]: I1002 19:39:26.940325 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-lib-modules\") pod \"cilium-cgbrl\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " pod="kube-system/cilium-cgbrl" Oct 2 19:39:26.940390 kubelet[1562]: I1002 19:39:26.940383 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-host-proc-sys-kernel\") pod \"cilium-cgbrl\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " pod="kube-system/cilium-cgbrl" Oct 2 19:39:26.940449 kubelet[1562]: I1002 19:39:26.940442 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cd7aec30-deb4-4fea-8195-316940ef2335-hubble-tls\") pod \"cilium-cgbrl\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " pod="kube-system/cilium-cgbrl" Oct 2 19:39:26.940507 kubelet[1562]: I1002 19:39:26.940500 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bf1a6a4b-41f1-42c4-9b9e-7ab8b65c7094-kube-proxy\") pod \"kube-proxy-tp7bw\" (UID: \"bf1a6a4b-41f1-42c4-9b9e-7ab8b65c7094\") " pod="kube-system/kube-proxy-tp7bw" Oct 2 19:39:26.940559 kubelet[1562]: I1002 19:39:26.940553 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf1a6a4b-41f1-42c4-9b9e-7ab8b65c7094-xtables-lock\") pod \"kube-proxy-tp7bw\" (UID: \"bf1a6a4b-41f1-42c4-9b9e-7ab8b65c7094\") " pod="kube-system/kube-proxy-tp7bw" Oct 2 19:39:26.940617 kubelet[1562]: I1002 
19:39:26.940610 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-etc-cni-netd\") pod \"cilium-cgbrl\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " pod="kube-system/cilium-cgbrl" Oct 2 19:39:26.940671 kubelet[1562]: I1002 19:39:26.940665 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cd7aec30-deb4-4fea-8195-316940ef2335-clustermesh-secrets\") pod \"cilium-cgbrl\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " pod="kube-system/cilium-cgbrl" Oct 2 19:39:26.940723 kubelet[1562]: I1002 19:39:26.940717 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4lwq\" (UniqueName: \"kubernetes.io/projected/cd7aec30-deb4-4fea-8195-316940ef2335-kube-api-access-x4lwq\") pod \"cilium-cgbrl\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " pod="kube-system/cilium-cgbrl" Oct 2 19:39:26.940778 kubelet[1562]: I1002 19:39:26.940772 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-cilium-run\") pod \"cilium-cgbrl\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " pod="kube-system/cilium-cgbrl" Oct 2 19:39:26.940832 kubelet[1562]: I1002 19:39:26.940825 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-hostproc\") pod \"cilium-cgbrl\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " pod="kube-system/cilium-cgbrl" Oct 2 19:39:26.940885 kubelet[1562]: I1002 19:39:26.940878 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-cilium-cgroup\") pod \"cilium-cgbrl\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " pod="kube-system/cilium-cgbrl" Oct 2 19:39:26.940937 kubelet[1562]: I1002 19:39:26.940931 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-xtables-lock\") pod \"cilium-cgbrl\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " pod="kube-system/cilium-cgbrl" Oct 2 19:39:26.940991 kubelet[1562]: I1002 19:39:26.940984 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-host-proc-sys-net\") pod \"cilium-cgbrl\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " pod="kube-system/cilium-cgbrl" Oct 2 19:39:26.941054 kubelet[1562]: I1002 19:39:26.941048 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf1a6a4b-41f1-42c4-9b9e-7ab8b65c7094-lib-modules\") pod \"kube-proxy-tp7bw\" (UID: \"bf1a6a4b-41f1-42c4-9b9e-7ab8b65c7094\") " pod="kube-system/kube-proxy-tp7bw" Oct 2 19:39:26.941246 kubelet[1562]: I1002 19:39:26.941231 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-cni-path\") pod \"cilium-cgbrl\" (UID: 
\"cd7aec30-deb4-4fea-8195-316940ef2335\") " pod="kube-system/cilium-cgbrl" Oct 2 19:39:26.941330 kubelet[1562]: I1002 19:39:26.941322 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd7aec30-deb4-4fea-8195-316940ef2335-cilium-config-path\") pod \"cilium-cgbrl\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " pod="kube-system/cilium-cgbrl" Oct 2 19:39:26.941409 kubelet[1562]: I1002 19:39:26.941396 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9ng7\" (UniqueName: \"kubernetes.io/projected/bf1a6a4b-41f1-42c4-9b9e-7ab8b65c7094-kube-api-access-c9ng7\") pod \"kube-proxy-tp7bw\" (UID: \"bf1a6a4b-41f1-42c4-9b9e-7ab8b65c7094\") " pod="kube-system/kube-proxy-tp7bw" Oct 2 19:39:26.941796 kubelet[1562]: I1002 19:39:26.941785 1562 reconciler.go:169] "Reconciler: start to sync state" Oct 2 19:39:26.980968 kubelet[1562]: E1002 19:39:26.980951 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:27.247598 env[1150]: time="2023-10-02T19:39:27.245035833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cgbrl,Uid:cd7aec30-deb4-4fea-8195-316940ef2335,Namespace:kube-system,Attempt:0,}" Oct 2 19:39:27.536647 env[1150]: time="2023-10-02T19:39:27.536520284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tp7bw,Uid:bf1a6a4b-41f1-42c4-9b9e-7ab8b65c7094,Namespace:kube-system,Attempt:0,}" Oct 2 19:39:27.927168 kubelet[1562]: E1002 19:39:27.927128 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:27.964202 env[1150]: time="2023-10-02T19:39:27.964181550Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:27.964722 env[1150]: time="2023-10-02T19:39:27.964708260Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:27.965363 env[1150]: time="2023-10-02T19:39:27.965349033Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:27.966071 env[1150]: time="2023-10-02T19:39:27.966054469Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:27.968373 env[1150]: time="2023-10-02T19:39:27.968356697Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:27.969031 env[1150]: time="2023-10-02T19:39:27.969019531Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:27.970311 env[1150]: time="2023-10-02T19:39:27.970256280Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:27.970645 env[1150]: time="2023-10-02T19:39:27.970623953Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:27.985264 env[1150]: time="2023-10-02T19:39:27.978362042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:39:27.985264 env[1150]: time="2023-10-02T19:39:27.978384197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:39:27.985264 env[1150]: time="2023-10-02T19:39:27.978399702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:39:27.985264 env[1150]: time="2023-10-02T19:39:27.978451823Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462 pid=1676 runtime=io.containerd.runc.v2 Oct 2 19:39:27.985264 env[1150]: time="2023-10-02T19:39:27.978365092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:39:27.985264 env[1150]: time="2023-10-02T19:39:27.978384109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:39:27.985264 env[1150]: time="2023-10-02T19:39:27.978390905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:39:27.985264 env[1150]: time="2023-10-02T19:39:27.978451684Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b22fd76af68b8e27f0a909c262f078f85a4c2e8b3eb359431869752707fceee pid=1680 runtime=io.containerd.runc.v2 Oct 2 19:39:27.995438 systemd[1]: Started cri-containerd-4b22fd76af68b8e27f0a909c262f078f85a4c2e8b3eb359431869752707fceee.scope. 
Oct 2 19:39:28.008000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.008000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.014068 kernel: audit: type=1400 audit(1696275568.008:582): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.014110 kernel: audit: type=1400 audit(1696275568.008:583): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.014127 kernel: audit: type=1400 audit(1696275568.008:584): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.008000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.008000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.020042 kernel: audit: type=1400 audit(1696275568.008:585): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.008000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.008000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.027235 kernel: audit: type=1400 audit(1696275568.008:586): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.027288 kernel: audit: type=1400 audit(1696275568.008:587): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.008000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.031729 kernel: audit: type=1400 audit(1696275568.008:588): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.031768 kernel: audit: type=1400 audit(1696275568.008:589): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.008000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.031597 systemd[1]: Started cri-containerd-d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462.scope. Oct 2 19:39:28.008000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.033981 env[1150]: time="2023-10-02T19:39:28.033962550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tp7bw,Uid:bf1a6a4b-41f1-42c4-9b9e-7ab8b65c7094,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b22fd76af68b8e27f0a909c262f078f85a4c2e8b3eb359431869752707fceee\"" Oct 2 19:39:28.035380 kernel: audit: type=1400 audit(1696275568.008:590): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.010000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.035904 env[1150]: time="2023-10-02T19:39:28.035889140Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\"" Oct 2 19:39:28.038027 kernel: audit: type=1400 audit(1696275568.010:591): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.010000 audit: BPF prog-id=67 op=LOAD Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { bpf } for pid=1699 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=1680 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:28.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462323266643736616636386238653237663061393039633236326630 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { perfmon } for pid=1699 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=1680 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:28.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462323266643736616636386238653237663061393039633236326630 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { bpf } for pid=1699 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 
audit[1699]: AVC avc: denied { bpf } for pid=1699 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { bpf } for pid=1699 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { perfmon } for pid=1699 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { perfmon } for pid=1699 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { perfmon } for pid=1699 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { perfmon } for pid=1699 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { perfmon } for pid=1699 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { bpf } for pid=1699 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { bpf } for pid=1699 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit: BPF prog-id=68 op=LOAD Oct 2 19:39:28.015000 audit[1699]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c0001df710 items=0 ppid=1680 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:28.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462323266643736616636386238653237663061393039633236326630 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { bpf } for pid=1699 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { bpf } for pid=1699 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { perfmon } for pid=1699 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { perfmon } for pid=1699 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { perfmon } for pid=1699 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { perfmon } for pid=1699 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { perfmon } for pid=1699 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { bpf } for pid=1699 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { bpf } for pid=1699 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit: BPF prog-id=69 op=LOAD Oct 2 19:39:28.015000 audit[1699]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c0001df758 items=0 ppid=1680 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:28.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462323266643736616636386238653237663061393039633236326630 Oct 2 19:39:28.015000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:39:28.015000 audit: BPF prog-id=68 op=UNLOAD Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { bpf } for pid=1699 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { bpf } for pid=1699 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { bpf } for pid=1699 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { perfmon } for pid=1699 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { perfmon } for pid=1699 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { perfmon } for pid=1699 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { perfmon } for pid=1699 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { perfmon } for pid=1699 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { bpf } for pid=1699 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit[1699]: AVC avc: denied { bpf } for pid=1699 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.015000 audit: BPF prog-id=70 op=LOAD Oct 2 19:39:28.015000 audit[1699]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c0001dfb68 items=0 ppid=1680 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:28.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462323266643736616636386238653237663061393039633236326630 Oct 2 19:39:28.042000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.042000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.042000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.042000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.042000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.042000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.042000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.042000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.042000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.042000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.042000 audit: BPF prog-id=71 op=LOAD Oct 2 19:39:28.042000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.042000 audit[1700]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1676 pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:28.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431316137613364663136633665396332356132666134373662643166 Oct 2 19:39:28.043000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.043000 audit[1700]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=1676 pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:28.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431316137613364663136633665396332356132666134373662643166 Oct 2 19:39:28.046480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2749506219.mount: Deactivated successfully. Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:39:28.046000 audit: BPF prog-id=72 op=LOAD Oct 2 19:39:28.046000 audit[1700]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000305150 items=0 ppid=1676 pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:28.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431316137613364663136633665396332356132666134373662643166 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit: BPF prog-id=73 op=LOAD Oct 2 19:39:28.046000 audit[1700]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000305198 items=0 ppid=1676 pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:28.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431316137613364663136633665396332356132666134373662643166 Oct 2 19:39:28.046000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:39:28.046000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:28.046000 audit: BPF prog-id=74 op=LOAD Oct 2 19:39:28.046000 audit[1700]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0003055a8 items=0 ppid=1676 pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:28.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431316137613364663136633665396332356132666134373662643166 Oct 2 19:39:28.054910 env[1150]: time="2023-10-02T19:39:28.054883180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cgbrl,Uid:cd7aec30-deb4-4fea-8195-316940ef2335,Namespace:kube-system,Attempt:0,} returns sandbox id \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\"" Oct 2 19:39:28.928225 kubelet[1562]: E1002 19:39:28.928194 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:28.998970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1519912944.mount: Deactivated successfully. 
Oct 2 19:39:29.352879 env[1150]: time="2023-10-02T19:39:29.352804542Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:29.353647 env[1150]: time="2023-10-02T19:39:29.353631208Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:29.354400 env[1150]: time="2023-10-02T19:39:29.354388054Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:29.354951 env[1150]: time="2023-10-02T19:39:29.354930277Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:29.355553 env[1150]: time="2023-10-02T19:39:29.355529290Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference \"sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2\"" Oct 2 19:39:29.356329 env[1150]: time="2023-10-02T19:39:29.356308544Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\"" Oct 2 19:39:29.357251 env[1150]: time="2023-10-02T19:39:29.357233259Z" level=info msg="CreateContainer within sandbox \"4b22fd76af68b8e27f0a909c262f078f85a4c2e8b3eb359431869752707fceee\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:39:29.363115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1204764213.mount: Deactivated successfully. Oct 2 19:39:29.365693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount572746339.mount: Deactivated successfully. Oct 2 19:39:29.367974 env[1150]: time="2023-10-02T19:39:29.367954932Z" level=info msg="CreateContainer within sandbox \"4b22fd76af68b8e27f0a909c262f078f85a4c2e8b3eb359431869752707fceee\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1628512c6ff4337774ef4009e935b041f63aef3e7931081240167da71435b633\"" Oct 2 19:39:29.368484 env[1150]: time="2023-10-02T19:39:29.368471445Z" level=info msg="StartContainer for \"1628512c6ff4337774ef4009e935b041f63aef3e7931081240167da71435b633\"" Oct 2 19:39:29.382827 systemd[1]: Started cri-containerd-1628512c6ff4337774ef4009e935b041f63aef3e7931081240167da71435b633.scope. 
Oct 2 19:39:29.393000 audit[1754]: AVC avc: denied { perfmon } for pid=1754 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.393000 audit[1754]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001456b0 a2=3c a3=8 items=0 ppid=1680 pid=1754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.393000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136323835313263366666343333373737346566343030396539333562 Oct 2 19:39:29.393000 audit[1754]: AVC avc: denied { bpf } for pid=1754 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.393000 audit[1754]: AVC avc: denied { bpf } for pid=1754 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.393000 audit[1754]: AVC avc: denied { bpf } for pid=1754 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.393000 audit[1754]: AVC avc: denied { perfmon } for pid=1754 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.393000 audit[1754]: AVC avc: denied { perfmon } for pid=1754 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.393000 audit[1754]: AVC avc: denied { perfmon } for pid=1754 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.393000 audit[1754]: AVC avc: denied { perfmon } for pid=1754 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.393000 audit[1754]: AVC avc: denied { perfmon } for pid=1754 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.393000 audit[1754]: AVC avc: denied { bpf } for pid=1754 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.393000 audit[1754]: AVC avc: denied { bpf } for pid=1754 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.393000 audit: BPF prog-id=75 op=LOAD Oct 2 19:39:29.393000 audit[1754]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001459d8 a2=78 a3=c0003aa520 items=0 ppid=1680 pid=1754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.393000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136323835313263366666343333373737346566343030396539333562 Oct 2 19:39:29.394000 audit[1754]: AVC avc: denied { bpf } for pid=1754 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.394000 audit[1754]: AVC avc: denied { bpf } for pid=1754 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.394000 audit[1754]: AVC avc: denied { perfmon } for pid=1754 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.394000 audit[1754]: AVC avc: denied { perfmon } for pid=1754 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.394000 audit[1754]: AVC avc: denied { perfmon } for pid=1754 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.394000 audit[1754]: AVC avc: denied { perfmon } for pid=1754 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.394000 audit[1754]: AVC avc: denied { perfmon } for pid=1754 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.394000 audit[1754]: AVC avc: denied { bpf } for pid=1754 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.394000 audit[1754]: AVC avc: denied { bpf } for pid=1754 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.394000 audit: BPF prog-id=76 op=LOAD Oct 2 19:39:29.394000 audit[1754]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000145770 a2=78 a3=c0003aa568 items=0 ppid=1680 pid=1754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136323835313263366666343333373737346566343030396539333562 Oct 2 19:39:29.394000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:39:29.394000 audit: BPF prog-id=75 op=UNLOAD Oct 2 19:39:29.394000 audit[1754]: AVC avc: denied { bpf } for pid=1754 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.394000 audit[1754]: AVC avc: denied { bpf } for pid=1754 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.394000 audit[1754]: AVC avc: denied { bpf } for pid=1754 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.394000 audit[1754]: AVC avc: denied { perfmon } for pid=1754 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.394000 audit[1754]: AVC avc: denied { perfmon } for pid=1754 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.394000 audit[1754]: AVC avc: denied { perfmon } for pid=1754 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.394000 audit[1754]: AVC avc: denied { perfmon } for pid=1754 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.394000 audit[1754]: AVC avc: denied { perfmon } for pid=1754 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.394000 audit[1754]: AVC avc: denied { bpf } for pid=1754 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.394000 audit[1754]: AVC avc: denied { bpf } for pid=1754 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:29.394000 audit: BPF prog-id=77 op=LOAD Oct 2 19:39:29.394000 audit[1754]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000145c30 a2=78 a3=c0003aa5f8 items=0 ppid=1680 pid=1754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136323835313263366666343333373737346566343030396539333562 Oct 2 19:39:29.403370 env[1150]: time="2023-10-02T19:39:29.403339857Z" level=info msg="StartContainer for \"1628512c6ff4337774ef4009e935b041f63aef3e7931081240167da71435b633\" returns successfully" Oct 2 19:39:29.432691 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) Oct 2 19:39:29.432770 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes) Oct 2 19:39:29.432790 kernel: IPVS: ipvs loaded. Oct 2 19:39:29.440029 kernel: IPVS: [rr] scheduler registered. Oct 2 19:39:29.445026 kernel: IPVS: [wrr] scheduler registered. Oct 2 19:39:29.449031 kernel: IPVS: [sh] scheduler registered. 
Oct 2 19:39:29.471000 audit[1815]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1815 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:29.471000 audit[1815]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3a13a6f0 a2=0 a3=7ffe3a13a6dc items=0 ppid=1765 pid=1815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.471000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:39:29.471000 audit[1816]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=1816 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:29.471000 audit[1816]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffbe092f00 a2=0 a3=7fffbe092eec items=0 ppid=1765 pid=1816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.471000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:39:29.472000 audit[1817]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1817 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:29.472000 audit[1817]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff24e90e50 a2=0 a3=7fff24e90e3c items=0 ppid=1765 pid=1817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.472000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:39:29.472000 audit[1818]: NETFILTER_CFG table=nat:38 family=10 entries=1 op=nft_register_chain pid=1818 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:29.472000 audit[1818]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf0021650 a2=0 a3=7ffdf002163c items=0 ppid=1765 pid=1818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.472000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:39:29.473000 audit[1819]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_chain pid=1819 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:29.473000 audit[1819]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcad66ec60 a2=0 a3=7ffcad66ec4c items=0 ppid=1765 pid=1819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.473000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:39:29.475000 audit[1820]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=1820 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 
19:39:29.475000 audit[1820]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdab90bfc0 a2=0 a3=7ffdab90bfac items=0 ppid=1765 pid=1820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.475000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:39:29.571000 audit[1821]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1821 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:29.571000 audit[1821]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd2c2bf310 a2=0 a3=7ffd2c2bf2fc items=0 ppid=1765 pid=1821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.571000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:39:29.573000 audit[1823]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1823 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:29.573000 audit[1823]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc18ff1020 a2=0 a3=7ffc18ff100c items=0 ppid=1765 pid=1823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.573000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:39:29.575000 audit[1826]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=1826 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:29.575000 audit[1826]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff7fa85450 a2=0 a3=7fff7fa8543c items=0 ppid=1765 pid=1826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.575000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:39:29.576000 audit[1827]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=1827 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:29.576000 audit[1827]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3eda02a0 a2=0 a3=7ffe3eda028c items=0 ppid=1765 pid=1827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.576000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:39:29.577000 audit[1829]: 
NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=1829 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:29.577000 audit[1829]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe2303a5c0 a2=0 a3=7ffe2303a5ac items=0 ppid=1765 pid=1829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.577000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:39:29.578000 audit[1830]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=1830 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:29.578000 audit[1830]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9b46b4e0 a2=0 a3=7fff9b46b4cc items=0 ppid=1765 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.578000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:39:29.579000 audit[1832]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1832 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:29.579000 audit[1832]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeb6ace580 a2=0 a3=7ffeb6ace56c items=0 ppid=1765 pid=1832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.579000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:39:29.581000 audit[1835]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1835 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:29.581000 audit[1835]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeef31e600 a2=0 a3=7ffeef31e5ec items=0 ppid=1765 pid=1835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.581000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:39:29.582000 audit[1836]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1836 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:29.582000 audit[1836]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff85ba76a0 a2=0 a3=7fff85ba768c items=0 ppid=1765 pid=1836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.582000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:39:29.584000 audit[1838]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1838 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:29.584000 audit[1838]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdcc311060 a2=0 a3=7ffdcc31104c items=0 ppid=1765 pid=1838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.584000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:39:29.584000 audit[1839]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=1839 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:29.584000 audit[1839]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd8a323c0 a2=0 a3=7ffdd8a323ac items=0 ppid=1765 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.584000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:39:29.586000 audit[1841]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1841 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:29.586000 audit[1841]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd125517d0 a2=0 a3=7ffd125517bc items=0 ppid=1765 pid=1841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.586000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:39:29.589000 audit[1844]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1844 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:29.589000 audit[1844]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcdad0d4a0 a2=0 a3=7ffcdad0d48c items=0 ppid=1765 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.589000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:39:29.591000 audit[1847]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=1847 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:29.591000 audit[1847]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff47821ed0 a2=0 a3=7fff47821ebc items=0 ppid=1765 pid=1847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.591000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:39:29.592000 audit[1848]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1848 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:29.592000 audit[1848]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc5b4c5310 a2=0 a3=7ffc5b4c52fc items=0 ppid=1765 pid=1848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.592000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:39:29.593000 audit[1850]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=1850 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:29.593000 audit[1850]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fffdf0ff850 a2=0 a3=7fffdf0ff83c items=0 ppid=1765 pid=1850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.593000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:39:29.595000 audit[1853]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1853 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:29.595000 audit[1853]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc78d5e920 a2=0 a3=7ffc78d5e90c items=0 ppid=1765 pid=1853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.595000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:39:29.616000 audit[1857]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=1857 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:39:29.616000 audit[1857]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7fff9efdf660 a2=0 a3=7fff9efdf64c items=0 ppid=1765 pid=1857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.616000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:39:29.638000 audit[1857]: 
NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=1857 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:39:29.638000 audit[1857]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fff9efdf660 a2=0 a3=7fff9efdf64c items=0 ppid=1765 pid=1857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.638000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:39:29.642000 audit[1861]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1861 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:29.642000 audit[1861]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc36dacc80 a2=0 a3=7ffc36dacc6c items=0 ppid=1765 pid=1861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.642000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:39:29.644000 audit[1863]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=1863 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:29.644000 audit[1863]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffd5a0a3a0 a2=0 a3=7fffd5a0a38c items=0 ppid=1765 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.644000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:39:29.647000 audit[1866]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=1866 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:29.647000 audit[1866]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc121c6950 a2=0 a3=7ffc121c693c items=0 ppid=1765 pid=1866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.647000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:39:29.648000 audit[1867]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=1867 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:29.648000 audit[1867]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc2326760 a2=0 a3=7fffc232674c items=0 ppid=1765 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.648000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:39:29.649000 audit[1869]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=1869 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:29.649000 audit[1869]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd16194bd0 a2=0 a3=7ffd16194bbc items=0 ppid=1765 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.649000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:39:29.650000 audit[1870]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1870 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:29.650000 audit[1870]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe68d4d240 a2=0 a3=7ffe68d4d22c items=0 ppid=1765 pid=1870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.650000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:39:29.652000 audit[1872]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1872 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:29.652000 audit[1872]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc2fbf5a10 a2=0 a3=7ffc2fbf59fc items=0 ppid=1765 pid=1872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.652000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:39:29.654000 audit[1875]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=1875 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:29.654000 audit[1875]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd85f9b2e0 a2=0 a3=7ffd85f9b2cc items=0 ppid=1765 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.654000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:39:29.655000 audit[1876]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=1876 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:29.655000 audit[1876]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe5094c640 a2=0 a3=7ffe5094c62c 
items=0 ppid=1765 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.655000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:39:29.657000 audit[1878]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=1878 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:29.657000 audit[1878]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc90803390 a2=0 a3=7ffc9080337c items=0 ppid=1765 pid=1878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.657000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:39:29.660000 audit[1879]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1879 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:29.660000 audit[1879]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcdb1561d0 a2=0 a3=7ffcdb1561bc items=0 ppid=1765 pid=1879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.660000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:39:29.662000 audit[1881]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1881 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:29.662000 audit[1881]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd95bab3e0 a2=0 a3=7ffd95bab3cc items=0 ppid=1765 pid=1881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.662000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:39:29.664000 audit[1884]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=1884 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:29.664000 audit[1884]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc24535250 a2=0 a3=7ffc2453523c items=0 ppid=1765 pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.664000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:39:29.667000 audit[1887]: NETFILTER_CFG 
table=filter:73 family=10 entries=1 op=nft_register_rule pid=1887 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:29.667000 audit[1887]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff1bc62050 a2=0 a3=7fff1bc6203c items=0 ppid=1765 pid=1887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.667000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:39:29.668000 audit[1888]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=1888 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:29.668000 audit[1888]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffea048df50 a2=0 a3=7ffea048df3c items=0 ppid=1765 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.668000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:39:29.670000 audit[1890]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=1890 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:29.670000 audit[1890]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffcbd1ad3b0 a2=0 a3=7ffcbd1ad39c items=0 ppid=1765 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.670000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:39:29.672000 audit[1893]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=1893 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:29.672000 audit[1893]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc50c8a4a0 a2=0 a3=7ffc50c8a48c items=0 ppid=1765 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.672000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:39:29.676000 audit[1897]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=1897 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:39:29.676000 audit[1897]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffeda108410 a2=0 a3=7ffeda1083fc items=0 ppid=1765 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 
19:39:29.676000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:39:29.677000 audit[1897]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=1897 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:39:29.677000 audit[1897]: SYSCALL arch=c000003e syscall=46 success=yes exit=1860 a0=3 a1=7ffeda108410 a2=0 a3=7ffeda1083fc items=0 ppid=1765 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:29.677000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:39:29.929040 kubelet[1562]: E1002 19:39:29.928999 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:30.930051 kubelet[1562]: E1002 19:39:30.930020 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:31.930154 kubelet[1562]: E1002 19:39:31.930111 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:31.982007 kubelet[1562]: E1002 19:39:31.981985 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:32.930241 kubelet[1562]: E1002 19:39:32.930215 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:33.931234 kubelet[1562]: E1002 19:39:33.931201 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:34.572786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3361393748.mount: Deactivated successfully. 
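The proctitle= fields in the NETFILTER_CFG/SYSCALL audit records above are the invoking command line, hex-encoded with NUL bytes separating the arguments. A minimal sketch, assuming Python 3, that decodes the first mangle-table value recorded above back into the underlying iptables invocation:

# Audit PROCTITLE values are hex-encoded argv with NUL separators.
# The sample below is the first proctitle= value recorded above.
PROCTITLE_HEX = (
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
)

def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    # Split on NUL bytes and drop the empty trailing piece, if any.
    return " ".join(part.decode() for part in raw.split(b"\x00") if part)

print(decode_proctitle(PROCTITLE_HEX))
# -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle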
Oct 2 19:39:34.932149 kubelet[1562]: E1002 19:39:34.932111 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:35.932438 kubelet[1562]: E1002 19:39:35.932413 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:36.932549 kubelet[1562]: E1002 19:39:36.932509 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:36.983082 kubelet[1562]: E1002 19:39:36.983060 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:37.068215 env[1150]: time="2023-10-02T19:39:37.068188117Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:37.068905 env[1150]: time="2023-10-02T19:39:37.068887367Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:37.069757 env[1150]: time="2023-10-02T19:39:37.069743830Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:37.070131 env[1150]: time="2023-10-02T19:39:37.070117362Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\" returns image reference \"sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88\"" Oct 2 19:39:37.071363 env[1150]: time="2023-10-02T19:39:37.071349121Z" level=info msg="CreateContainer within sandbox \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:39:37.075688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2146868755.mount: Deactivated successfully. Oct 2 19:39:37.078408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1661363906.mount: Deactivated successfully. Oct 2 19:39:37.085704 env[1150]: time="2023-10-02T19:39:37.085679162Z" level=info msg="CreateContainer within sandbox \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875\"" Oct 2 19:39:37.086146 env[1150]: time="2023-10-02T19:39:37.086125918Z" level=info msg="StartContainer for \"0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875\"" Oct 2 19:39:37.100772 systemd[1]: Started cri-containerd-0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875.scope. Oct 2 19:39:37.109095 systemd[1]: cri-containerd-0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875.scope: Deactivated successfully. Oct 2 19:39:37.109259 systemd[1]: Stopped cri-containerd-0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875.scope. 
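The recurring kubelet messages about /etc/kubernetes/manifests come from the static-pod file source: the directory named by the kubelet's staticPodPath does not exist yet, so the source is skipped ("path does not exist, ignoring"). If static pods are not used the message is harmless; otherwise creating the directory stops it. A minimal sketch, assuming the kubelet is configured with the path shown in the log:

# Create the static-pod manifest directory the kubelet is polling for.
# Path taken from the log lines above; adjust if staticPodPath differs.
import os

MANIFEST_DIR = "/etc/kubernetes/manifests"
os.makedirs(MANIFEST_DIR, exist_ok=True)
print("present" if os.path.isdir(MANIFEST_DIR) else "missing", MANIFEST_DIR)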
Oct 2 19:39:37.618295 env[1150]: time="2023-10-02T19:39:37.618257687Z" level=info msg="shim disconnected" id=0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875 Oct 2 19:39:37.618538 env[1150]: time="2023-10-02T19:39:37.618523024Z" level=warning msg="cleaning up after shim disconnected" id=0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875 namespace=k8s.io Oct 2 19:39:37.618618 env[1150]: time="2023-10-02T19:39:37.618605198Z" level=info msg="cleaning up dead shim" Oct 2 19:39:37.624924 env[1150]: time="2023-10-02T19:39:37.624886756Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:39:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1923 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:39:37Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:39:37.625140 env[1150]: time="2023-10-02T19:39:37.625068430Z" level=error msg="copy shim log" error="read /proc/self/fd/53: file already closed" Oct 2 19:39:37.628053 env[1150]: time="2023-10-02T19:39:37.628032530Z" level=error msg="Failed to pipe stdout of container \"0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875\"" error="reading from a closed fifo" Oct 2 19:39:37.628145 env[1150]: time="2023-10-02T19:39:37.628129769Z" level=error msg="Failed to pipe stderr of container \"0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875\"" error="reading from a closed fifo" Oct 2 19:39:37.628714 env[1150]: time="2023-10-02T19:39:37.628689989Z" level=error msg="StartContainer for \"0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:39:37.629233 kubelet[1562]: E1002 19:39:37.628927 1562 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875" Oct 2 19:39:37.629233 kubelet[1562]: E1002 19:39:37.629038 1562 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:39:37.629233 kubelet[1562]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:39:37.629233 kubelet[1562]: rm /hostbin/cilium-mount Oct 2 19:39:37.629355 kubelet[1562]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-x4lwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:39:37.629416 kubelet[1562]: E1002 19:39:37.629062 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-cgbrl" podUID=cd7aec30-deb4-4fea-8195-316940ef2335 Oct 2 19:39:37.933557 kubelet[1562]: E1002 19:39:37.933525 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:38.074827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875-rootfs.mount: Deactivated successfully. Oct 2 19:39:38.101902 env[1150]: time="2023-10-02T19:39:38.101870080Z" level=info msg="CreateContainer within sandbox \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:39:38.137122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4193004481.mount: Deactivated successfully. Oct 2 19:39:38.139534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount73058506.mount: Deactivated successfully. 
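The repeated "write /proc/self/attr/keycreate: invalid argument" failures occur while runc applies the pod's SELinux options (Type:spc_t in the container spec above): before exec'ing the container process it writes the requested label to /proc/self/attr/keycreate so that newly created kernel keyrings get that label, and the kernel rejects the write here. A minimal diagnostic sketch, assuming it is run as root on the affected node, that reproduces only the keycreate write so the failure can be examined outside of runc; the full label string is an assumption built from the spec's SELinuxOptions:

# Attempt the same keyring-creation label write that runc performs before exec.
# Run as root on the affected node; expect the EINVAL seen in the log if the
# policy or kernel rejects the label.
LABEL = "system_u:system_r:spc_t:s0"  # assumed expansion of SELinuxOptions above

try:
    with open("/proc/self/attr/keycreate", "w") as f:
        f.write(LABEL)
    print("keycreate label accepted:", LABEL)
except OSError as exc:
    print("keycreate write failed:", exc)  # e.g. [Errno 22] Invalid argument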
Oct 2 19:39:38.157866 env[1150]: time="2023-10-02T19:39:38.157822957Z" level=info msg="CreateContainer within sandbox \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5\"" Oct 2 19:39:38.158212 env[1150]: time="2023-10-02T19:39:38.158198358Z" level=info msg="StartContainer for \"8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5\"" Oct 2 19:39:38.168239 systemd[1]: Started cri-containerd-8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5.scope. Oct 2 19:39:38.176438 systemd[1]: cri-containerd-8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5.scope: Deactivated successfully. Oct 2 19:39:38.176619 systemd[1]: Stopped cri-containerd-8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5.scope. Oct 2 19:39:38.180938 env[1150]: time="2023-10-02T19:39:38.180901029Z" level=info msg="shim disconnected" id=8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5 Oct 2 19:39:38.180938 env[1150]: time="2023-10-02T19:39:38.180935233Z" level=warning msg="cleaning up after shim disconnected" id=8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5 namespace=k8s.io Oct 2 19:39:38.180938 env[1150]: time="2023-10-02T19:39:38.180941543Z" level=info msg="cleaning up dead shim" Oct 2 19:39:38.186028 env[1150]: time="2023-10-02T19:39:38.185536003Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:39:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1960 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:39:38Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:39:38.186307 env[1150]: time="2023-10-02T19:39:38.186273788Z" level=error msg="copy shim log" error="read /proc/self/fd/53: file already closed" Oct 2 19:39:38.187230 env[1150]: time="2023-10-02T19:39:38.187206164Z" level=error msg="Failed to pipe stdout of container \"8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5\"" error="reading from a closed fifo" Oct 2 19:39:38.188142 env[1150]: time="2023-10-02T19:39:38.188121756Z" level=error msg="Failed to pipe stderr of container \"8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5\"" error="reading from a closed fifo" Oct 2 19:39:38.188843 env[1150]: time="2023-10-02T19:39:38.188819021Z" level=error msg="StartContainer for \"8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:39:38.188978 kubelet[1562]: E1002 19:39:38.188958 1562 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5" Oct 2 19:39:38.189073 kubelet[1562]: E1002 19:39:38.189060 1562 kuberuntime_manager.go:862] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:39:38.189073 kubelet[1562]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:39:38.189073 kubelet[1562]: rm /hostbin/cilium-mount Oct 2 19:39:38.189073 kubelet[1562]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-x4lwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:39:38.189210 kubelet[1562]: E1002 19:39:38.189088 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-cgbrl" podUID=cd7aec30-deb4-4fea-8195-316940ef2335 Oct 2 19:39:38.934240 kubelet[1562]: E1002 19:39:38.934206 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:39.102062 kubelet[1562]: I1002 19:39:39.102048 1562 scope.go:115] "RemoveContainer" containerID="0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875" Oct 2 19:39:39.102344 kubelet[1562]: I1002 19:39:39.102337 1562 scope.go:115] "RemoveContainer" containerID="0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875" Oct 2 19:39:39.103179 env[1150]: time="2023-10-02T19:39:39.103156986Z" level=info msg="RemoveContainer for \"0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875\"" Oct 2 19:39:39.104054 env[1150]: time="2023-10-02T19:39:39.104007619Z" level=info msg="RemoveContainer for \"0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875\"" Oct 2 19:39:39.104109 env[1150]: time="2023-10-02T19:39:39.104086583Z" level=error msg="RemoveContainer for 
\"0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875\" failed" error="failed to set removing state for container \"0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875\": container is already in removing state" Oct 2 19:39:39.104250 kubelet[1562]: E1002 19:39:39.104241 1562 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875\": container is already in removing state" containerID="0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875" Oct 2 19:39:39.104333 kubelet[1562]: E1002 19:39:39.104325 1562 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875": container is already in removing state; Skipping pod "cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335)" Oct 2 19:39:39.104535 kubelet[1562]: E1002 19:39:39.104521 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335)\"" pod="kube-system/cilium-cgbrl" podUID=cd7aec30-deb4-4fea-8195-316940ef2335 Oct 2 19:39:39.105577 env[1150]: time="2023-10-02T19:39:39.105552388Z" level=info msg="RemoveContainer for \"0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875\" returns successfully" Oct 2 19:39:39.935279 kubelet[1562]: E1002 19:39:39.935257 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:40.103480 kubelet[1562]: E1002 19:39:40.103456 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335)\"" pod="kube-system/cilium-cgbrl" podUID=cd7aec30-deb4-4fea-8195-316940ef2335 Oct 2 19:39:40.722754 kubelet[1562]: W1002 19:39:40.722698 1562 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd7aec30_deb4_4fea_8195_316940ef2335.slice/cri-containerd-0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875.scope WatchSource:0}: container "0d47e2955eaac92a72c3def3d8eb7f83b90008cd4fd0ecd9fc3d48152c7b7875" in namespace "k8s.io": not found Oct 2 19:39:40.936222 kubelet[1562]: E1002 19:39:40.936193 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:41.937295 kubelet[1562]: E1002 19:39:41.937263 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:41.983737 kubelet[1562]: E1002 19:39:41.983723 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:42.937854 kubelet[1562]: E1002 19:39:42.937821 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:43.827584 kubelet[1562]: W1002 19:39:43.827559 1562 manager.go:1174] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd7aec30_deb4_4fea_8195_316940ef2335.slice/cri-containerd-8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5.scope WatchSource:0}: task 8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5 not found: not found Oct 2 19:39:43.938363 kubelet[1562]: E1002 19:39:43.938340 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:44.938910 kubelet[1562]: E1002 19:39:44.938885 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:45.939464 kubelet[1562]: E1002 19:39:45.939432 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:46.914138 kubelet[1562]: E1002 19:39:46.914114 1562 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:46.940506 kubelet[1562]: E1002 19:39:46.940483 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:46.985051 kubelet[1562]: E1002 19:39:46.985025 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:47.941673 kubelet[1562]: E1002 19:39:47.941643 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:48.941719 kubelet[1562]: E1002 19:39:48.941693 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:49.942691 kubelet[1562]: E1002 19:39:49.942626 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:50.943130 kubelet[1562]: E1002 19:39:50.943107 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:51.944145 kubelet[1562]: E1002 19:39:51.944077 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:51.986156 kubelet[1562]: E1002 19:39:51.986139 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:52.944484 kubelet[1562]: E1002 19:39:52.944461 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:53.945214 kubelet[1562]: E1002 19:39:53.945187 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:54.064248 env[1150]: time="2023-10-02T19:39:54.064209250Z" level=info msg="CreateContainer within sandbox \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:39:54.070432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1012969090.mount: Deactivated successfully. Oct 2 19:39:54.073215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2966484686.mount: Deactivated successfully. 
Oct 2 19:39:54.074698 env[1150]: time="2023-10-02T19:39:54.074672746Z" level=info msg="CreateContainer within sandbox \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93\"" Oct 2 19:39:54.075207 env[1150]: time="2023-10-02T19:39:54.075192353Z" level=info msg="StartContainer for \"ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93\"" Oct 2 19:39:54.087400 systemd[1]: Started cri-containerd-ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93.scope. Oct 2 19:39:54.097430 systemd[1]: cri-containerd-ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93.scope: Deactivated successfully. Oct 2 19:39:54.097586 systemd[1]: Stopped cri-containerd-ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93.scope. Oct 2 19:39:54.102484 env[1150]: time="2023-10-02T19:39:54.102450706Z" level=info msg="shim disconnected" id=ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93 Oct 2 19:39:54.102625 env[1150]: time="2023-10-02T19:39:54.102614165Z" level=warning msg="cleaning up after shim disconnected" id=ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93 namespace=k8s.io Oct 2 19:39:54.102675 env[1150]: time="2023-10-02T19:39:54.102665638Z" level=info msg="cleaning up dead shim" Oct 2 19:39:54.107326 env[1150]: time="2023-10-02T19:39:54.107303005Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:39:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1996 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:39:54Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:39:54.107451 env[1150]: time="2023-10-02T19:39:54.107420313Z" level=error msg="copy shim log" error="read /proc/self/fd/53: file already closed" Oct 2 19:39:54.107561 env[1150]: time="2023-10-02T19:39:54.107540626Z" level=error msg="Failed to pipe stdout of container \"ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93\"" error="reading from a closed fifo" Oct 2 19:39:54.107593 env[1150]: time="2023-10-02T19:39:54.107580151Z" level=error msg="Failed to pipe stderr of container \"ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93\"" error="reading from a closed fifo" Oct 2 19:39:54.108038 env[1150]: time="2023-10-02T19:39:54.108018232Z" level=error msg="StartContainer for \"ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:39:54.108405 kubelet[1562]: E1002 19:39:54.108130 1562 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93" Oct 2 19:39:54.108405 kubelet[1562]: E1002 19:39:54.108197 1562 kuberuntime_manager.go:862] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:39:54.108405 kubelet[1562]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:39:54.108405 kubelet[1562]: rm /hostbin/cilium-mount Oct 2 19:39:54.108546 kubelet[1562]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-x4lwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:39:54.108596 kubelet[1562]: E1002 19:39:54.108221 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-cgbrl" podUID=cd7aec30-deb4-4fea-8195-316940ef2335 Oct 2 19:39:54.120534 kubelet[1562]: I1002 19:39:54.120322 1562 scope.go:115] "RemoveContainer" containerID="8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5" Oct 2 19:39:54.120534 kubelet[1562]: I1002 19:39:54.120502 1562 scope.go:115] "RemoveContainer" containerID="8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5" Oct 2 19:39:54.121219 env[1150]: time="2023-10-02T19:39:54.121184565Z" level=info msg="RemoveContainer for \"8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5\"" Oct 2 19:39:54.121326 env[1150]: time="2023-10-02T19:39:54.121313429Z" level=info msg="RemoveContainer for \"8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5\"" Oct 2 19:39:54.121428 env[1150]: time="2023-10-02T19:39:54.121407246Z" level=error msg="RemoveContainer for \"8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5\" failed" error="failed to set removing state for container 
\"8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5\": container is already in removing state" Oct 2 19:39:54.121596 kubelet[1562]: E1002 19:39:54.121567 1562 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5\": container is already in removing state" containerID="8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5" Oct 2 19:39:54.121596 kubelet[1562]: I1002 19:39:54.121585 1562 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5} err="rpc error: code = Unknown desc = failed to set removing state for container \"8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5\": container is already in removing state" Oct 2 19:39:54.123029 env[1150]: time="2023-10-02T19:39:54.122070677Z" level=info msg="RemoveContainer for \"8b92b8668b094565c38fe90199faed8d7941181dce5350121f3681cced2bc6e5\" returns successfully" Oct 2 19:39:54.123092 kubelet[1562]: E1002 19:39:54.122434 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335)\"" pod="kube-system/cilium-cgbrl" podUID=cd7aec30-deb4-4fea-8195-316940ef2335 Oct 2 19:39:54.945465 kubelet[1562]: E1002 19:39:54.945416 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:55.068441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93-rootfs.mount: Deactivated successfully. 
Oct 2 19:39:55.945911 kubelet[1562]: E1002 19:39:55.945868 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:56.945988 kubelet[1562]: E1002 19:39:56.945954 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:56.987442 kubelet[1562]: E1002 19:39:56.987418 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:57.208188 kubelet[1562]: W1002 19:39:57.208100 1562 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd7aec30_deb4_4fea_8195_316940ef2335.slice/cri-containerd-ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93.scope WatchSource:0}: task ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93 not found: not found Oct 2 19:39:57.946934 kubelet[1562]: E1002 19:39:57.946908 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:58.947561 kubelet[1562]: E1002 19:39:58.947528 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:59.948634 kubelet[1562]: E1002 19:39:59.948605 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:00.949283 kubelet[1562]: E1002 19:40:00.949250 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:01.949951 kubelet[1562]: E1002 19:40:01.949918 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:01.988567 kubelet[1562]: E1002 19:40:01.988550 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:02.950302 kubelet[1562]: E1002 19:40:02.950275 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:03.950494 kubelet[1562]: E1002 19:40:03.950457 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:04.950608 kubelet[1562]: E1002 19:40:04.950578 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:05.950895 kubelet[1562]: E1002 19:40:05.950858 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:06.062356 kubelet[1562]: E1002 19:40:06.062305 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335)\"" pod="kube-system/cilium-cgbrl" podUID=cd7aec30-deb4-4fea-8195-316940ef2335 Oct 2 19:40:06.913948 kubelet[1562]: E1002 19:40:06.913916 1562 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:06.952073 kubelet[1562]: E1002 19:40:06.952055 1562 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:06.989919 kubelet[1562]: E1002 19:40:06.989896 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:07.952857 kubelet[1562]: E1002 19:40:07.952828 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:08.953927 kubelet[1562]: E1002 19:40:08.953872 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:09.954416 kubelet[1562]: E1002 19:40:09.954383 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:10.955268 kubelet[1562]: E1002 19:40:10.955243 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:11.956619 kubelet[1562]: E1002 19:40:11.956590 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:11.990429 kubelet[1562]: E1002 19:40:11.990415 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:12.957106 kubelet[1562]: E1002 19:40:12.957072 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:13.958122 kubelet[1562]: E1002 19:40:13.958091 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:14.958260 kubelet[1562]: E1002 19:40:14.958229 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:15.959249 kubelet[1562]: E1002 19:40:15.959225 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:16.960610 kubelet[1562]: E1002 19:40:16.960585 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:16.991341 kubelet[1562]: E1002 19:40:16.991325 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:17.961234 kubelet[1562]: E1002 19:40:17.961199 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:18.961473 kubelet[1562]: E1002 19:40:18.961445 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:19.065217 env[1150]: time="2023-10-02T19:40:19.065167364Z" level=info msg="CreateContainer within sandbox \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:40:19.071701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4094536188.mount: Deactivated successfully. Oct 2 19:40:19.075364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3970196032.mount: Deactivated successfully. 
Oct 2 19:40:19.077481 env[1150]: time="2023-10-02T19:40:19.077455456Z" level=info msg="CreateContainer within sandbox \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174\"" Oct 2 19:40:19.078040 env[1150]: time="2023-10-02T19:40:19.077760402Z" level=info msg="StartContainer for \"c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174\"" Oct 2 19:40:19.089679 systemd[1]: Started cri-containerd-c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174.scope. Oct 2 19:40:19.099230 systemd[1]: cri-containerd-c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174.scope: Deactivated successfully. Oct 2 19:40:19.099382 systemd[1]: Stopped cri-containerd-c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174.scope. Oct 2 19:40:19.105228 env[1150]: time="2023-10-02T19:40:19.105194714Z" level=info msg="shim disconnected" id=c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174 Oct 2 19:40:19.105228 env[1150]: time="2023-10-02T19:40:19.105224844Z" level=warning msg="cleaning up after shim disconnected" id=c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174 namespace=k8s.io Oct 2 19:40:19.105228 env[1150]: time="2023-10-02T19:40:19.105230672Z" level=info msg="cleaning up dead shim" Oct 2 19:40:19.109418 env[1150]: time="2023-10-02T19:40:19.109397585Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:40:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2038 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:40:19Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:40:19.109635 env[1150]: time="2023-10-02T19:40:19.109596922Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:40:19.110008 env[1150]: time="2023-10-02T19:40:19.109750347Z" level=error msg="Failed to pipe stdout of container \"c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174\"" error="reading from a closed fifo" Oct 2 19:40:19.110084 env[1150]: time="2023-10-02T19:40:19.109980019Z" level=error msg="Failed to pipe stderr of container \"c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174\"" error="reading from a closed fifo" Oct 2 19:40:19.110634 env[1150]: time="2023-10-02T19:40:19.110617728Z" level=error msg="StartContainer for \"c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:40:19.111624 kubelet[1562]: E1002 19:40:19.111314 1562 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174" Oct 2 19:40:19.111624 kubelet[1562]: E1002 19:40:19.111384 1562 kuberuntime_manager.go:862] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:40:19.111624 kubelet[1562]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:40:19.111624 kubelet[1562]: rm /hostbin/cilium-mount Oct 2 19:40:19.111778 kubelet[1562]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-x4lwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:40:19.111827 kubelet[1562]: E1002 19:40:19.111420 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-cgbrl" podUID=cd7aec30-deb4-4fea-8195-316940ef2335 Oct 2 19:40:19.153039 kubelet[1562]: I1002 19:40:19.152628 1562 scope.go:115] "RemoveContainer" containerID="ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93" Oct 2 19:40:19.153039 kubelet[1562]: I1002 19:40:19.152896 1562 scope.go:115] "RemoveContainer" containerID="ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93" Oct 2 19:40:19.154092 env[1150]: time="2023-10-02T19:40:19.154069187Z" level=info msg="RemoveContainer for \"ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93\"" Oct 2 19:40:19.154373 env[1150]: time="2023-10-02T19:40:19.154358215Z" level=info msg="RemoveContainer for \"ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93\"" Oct 2 19:40:19.155383 env[1150]: time="2023-10-02T19:40:19.155230324Z" level=info msg="RemoveContainer for \"ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93\" returns successfully" Oct 2 19:40:19.155523 env[1150]: time="2023-10-02T19:40:19.154558108Z" level=error 
msg="RemoveContainer for \"ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93\" failed" error="failed to set removing state for container \"ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93\": container is already in removing state" Oct 2 19:40:19.155668 kubelet[1562]: E1002 19:40:19.155655 1562 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93\": container is already in removing state" containerID="ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93" Oct 2 19:40:19.155719 kubelet[1562]: E1002 19:40:19.155676 1562 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "ebd0914af1102770b6843b247ae396b8f6e125cfaca9eeaa2012f0b99eb40b93": container is already in removing state; Skipping pod "cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335)" Oct 2 19:40:19.155852 kubelet[1562]: E1002 19:40:19.155835 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335)\"" pod="kube-system/cilium-cgbrl" podUID=cd7aec30-deb4-4fea-8195-316940ef2335 Oct 2 19:40:19.961861 kubelet[1562]: E1002 19:40:19.961835 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:20.069863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174-rootfs.mount: Deactivated successfully. 
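Every StartContainer attempt above fails at the same point: runc aborts container init because writing the SELinux label to /proc/self/attr/keycreate returns EINVAL, which typically indicates that the host kernel does not accept SELinux process labels even though the pod spec dumped in the log requests SELinuxOptions with type spc_t. The following is a hypothetical stand-alone probe of that same interface, not runc's code; run as root on the affected node it should reproduce the "invalid argument" seen above if that is indeed the cause.

```go
// Hypothetical probe of the interface runc reports failing on above: it
// writes an SELinux keyring label to /proc/self/attr/keycreate and reports
// the errno. Illustrative only; not runc's implementation.
package main

import (
	"errors"
	"fmt"
	"os"
	"syscall"
)

func main() {
	// selinuxfs is only mounted when the kernel has SELinux enabled.
	if _, err := os.Stat("/sys/fs/selinux"); err != nil {
		fmt.Println("selinuxfs not present (SELinux likely disabled):", err)
	}

	// The type spc_t mirrors the pod spec in the log; the user/role parts
	// of the label are assumptions made for illustration.
	label := []byte("system_u:system_r:spc_t:s0")
	err := os.WriteFile("/proc/self/attr/keycreate", label, 0o644)
	switch {
	case errors.Is(err, syscall.EINVAL):
		fmt.Println("keycreate write: EINVAL - matches the runc failure in the log")
	case err != nil:
		fmt.Println("keycreate write failed:", err)
	default:
		fmt.Println("keycreate label accepted; the host honours SELinux labels")
	}
}
```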
Oct 2 19:40:20.962659 kubelet[1562]: E1002 19:40:20.962611 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:21.963087 kubelet[1562]: E1002 19:40:21.963051 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:21.992781 kubelet[1562]: E1002 19:40:21.992754 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:22.210103 kubelet[1562]: W1002 19:40:22.210065 1562 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd7aec30_deb4_4fea_8195_316940ef2335.slice/cri-containerd-c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174.scope WatchSource:0}: task c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174 not found: not found Oct 2 19:40:22.963165 kubelet[1562]: E1002 19:40:22.963124 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:23.963436 kubelet[1562]: E1002 19:40:23.963394 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:24.963512 kubelet[1562]: E1002 19:40:24.963467 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:25.964601 kubelet[1562]: E1002 19:40:25.964553 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:26.914182 kubelet[1562]: E1002 19:40:26.914153 1562 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:26.965001 kubelet[1562]: E1002 19:40:26.964969 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:26.993158 kubelet[1562]: E1002 19:40:26.993136 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:27.965485 kubelet[1562]: E1002 19:40:27.965444 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:28.966251 kubelet[1562]: E1002 19:40:28.966220 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:29.966702 kubelet[1562]: E1002 19:40:29.966666 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:30.967686 kubelet[1562]: E1002 19:40:30.967654 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:31.063005 kubelet[1562]: E1002 19:40:31.062983 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335)\"" pod="kube-system/cilium-cgbrl" podUID=cd7aec30-deb4-4fea-8195-316940ef2335 Oct 2 19:40:31.968342 kubelet[1562]: E1002 19:40:31.968308 1562 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:31.993945 kubelet[1562]: E1002 19:40:31.993927 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:32.969201 kubelet[1562]: E1002 19:40:32.969173 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:33.969299 kubelet[1562]: E1002 19:40:33.969270 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:34.970327 kubelet[1562]: E1002 19:40:34.970303 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:35.971198 kubelet[1562]: E1002 19:40:35.971167 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:36.971914 kubelet[1562]: E1002 19:40:36.971858 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:36.995083 kubelet[1562]: E1002 19:40:36.995070 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:37.972398 kubelet[1562]: E1002 19:40:37.972362 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:38.973271 kubelet[1562]: E1002 19:40:38.973236 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:39.973896 kubelet[1562]: E1002 19:40:39.973869 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:40.974761 kubelet[1562]: E1002 19:40:40.974712 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:41.975092 kubelet[1562]: E1002 19:40:41.975037 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:41.996583 kubelet[1562]: E1002 19:40:41.996569 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:42.975771 kubelet[1562]: E1002 19:40:42.975746 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:43.976123 kubelet[1562]: E1002 19:40:43.976087 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:44.062326 kubelet[1562]: E1002 19:40:44.062304 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335)\"" pod="kube-system/cilium-cgbrl" podUID=cd7aec30-deb4-4fea-8195-316940ef2335 Oct 2 19:40:44.977143 kubelet[1562]: E1002 19:40:44.977114 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:40:45.978368 kubelet[1562]: E1002 19:40:45.978335 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:46.914280 kubelet[1562]: E1002 19:40:46.914253 1562 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:46.978939 kubelet[1562]: E1002 19:40:46.978906 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:46.998000 kubelet[1562]: E1002 19:40:46.997985 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:47.978996 kubelet[1562]: E1002 19:40:47.978972 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:48.980134 kubelet[1562]: E1002 19:40:48.980105 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:49.980813 kubelet[1562]: E1002 19:40:49.980784 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:50.981476 kubelet[1562]: E1002 19:40:50.981441 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:51.981692 kubelet[1562]: E1002 19:40:51.981664 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:51.999273 kubelet[1562]: E1002 19:40:51.999251 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:52.982459 kubelet[1562]: E1002 19:40:52.982433 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:53.983408 kubelet[1562]: E1002 19:40:53.983362 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:54.984100 kubelet[1562]: E1002 19:40:54.984063 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:55.984538 kubelet[1562]: E1002 19:40:55.984505 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:56.062682 kubelet[1562]: E1002 19:40:56.062661 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335)\"" pod="kube-system/cilium-cgbrl" podUID=cd7aec30-deb4-4fea-8195-316940ef2335 Oct 2 19:40:56.985438 kubelet[1562]: E1002 19:40:56.985392 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:56.999773 kubelet[1562]: E1002 19:40:56.999751 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:57.986159 kubelet[1562]: E1002 
19:40:57.986120 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:58.986421 kubelet[1562]: E1002 19:40:58.986389 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:59.987276 kubelet[1562]: E1002 19:40:59.987246 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:00.988700 kubelet[1562]: E1002 19:41:00.988660 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:01.989761 kubelet[1562]: E1002 19:41:01.989733 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:02.000600 kubelet[1562]: E1002 19:41:02.000580 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:02.990933 kubelet[1562]: E1002 19:41:02.990908 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:03.991749 kubelet[1562]: E1002 19:41:03.991724 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:04.992563 kubelet[1562]: E1002 19:41:04.992534 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:05.992915 kubelet[1562]: E1002 19:41:05.992886 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:06.913810 kubelet[1562]: E1002 19:41:06.913777 1562 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:06.994278 kubelet[1562]: E1002 19:41:06.994261 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:07.000928 kubelet[1562]: E1002 19:41:07.000918 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:07.065289 env[1150]: time="2023-10-02T19:41:07.065222892Z" level=info msg="CreateContainer within sandbox \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:41:07.071179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3386710409.mount: Deactivated successfully. Oct 2 19:41:07.074074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount327678686.mount: Deactivated successfully. 
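The var-lib-containerd-tmpmounts-containerd\x2dmount*.mount units that systemd deactivates around every CreateContainer call are ordinary mount units for containerd's temporary image mounts; in unit names systemd turns "/" into "-" and escapes a literal "-" as \x2d. A small decoder, simplified to handle only the \x2d escape these unit names use:

```go
// Simplified decoder for the systemd mount-unit names seen above.
// systemd encodes "/" as "-" and a literal "-" as "\x2d"; this sketch
// handles only that one escape, which is all these names contain.
package main

import (
	"fmt"
	"strings"
)

func main() {
	unit := `var-lib-containerd-tmpmounts-containerd\x2dmount3386710409.mount`
	name := strings.TrimSuffix(unit, ".mount")
	path := "/" + strings.ReplaceAll(name, "-", "/")
	path = strings.ReplaceAll(path, `\x2d`, "-")
	fmt.Println(path) // /var/lib/containerd/tmpmounts/containerd-mount3386710409
}
```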
Oct 2 19:41:07.075840 env[1150]: time="2023-10-02T19:41:07.075819120Z" level=info msg="CreateContainer within sandbox \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"c164c51aa489ee82d35a422204a5784d4965737ed3e8124031eab47ebfaf94a1\"" Oct 2 19:41:07.076225 env[1150]: time="2023-10-02T19:41:07.076185101Z" level=info msg="StartContainer for \"c164c51aa489ee82d35a422204a5784d4965737ed3e8124031eab47ebfaf94a1\"" Oct 2 19:41:07.086902 systemd[1]: Started cri-containerd-c164c51aa489ee82d35a422204a5784d4965737ed3e8124031eab47ebfaf94a1.scope. Oct 2 19:41:07.093988 systemd[1]: cri-containerd-c164c51aa489ee82d35a422204a5784d4965737ed3e8124031eab47ebfaf94a1.scope: Deactivated successfully. Oct 2 19:41:07.094156 systemd[1]: Stopped cri-containerd-c164c51aa489ee82d35a422204a5784d4965737ed3e8124031eab47ebfaf94a1.scope. Oct 2 19:41:07.098635 env[1150]: time="2023-10-02T19:41:07.098606305Z" level=info msg="shim disconnected" id=c164c51aa489ee82d35a422204a5784d4965737ed3e8124031eab47ebfaf94a1 Oct 2 19:41:07.098744 env[1150]: time="2023-10-02T19:41:07.098732833Z" level=warning msg="cleaning up after shim disconnected" id=c164c51aa489ee82d35a422204a5784d4965737ed3e8124031eab47ebfaf94a1 namespace=k8s.io Oct 2 19:41:07.098797 env[1150]: time="2023-10-02T19:41:07.098781625Z" level=info msg="cleaning up dead shim" Oct 2 19:41:07.104623 env[1150]: time="2023-10-02T19:41:07.104593758Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:41:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2083 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:41:07Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c164c51aa489ee82d35a422204a5784d4965737ed3e8124031eab47ebfaf94a1/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:41:07.104755 env[1150]: time="2023-10-02T19:41:07.104721376Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:41:07.106156 env[1150]: time="2023-10-02T19:41:07.106133579Z" level=error msg="Failed to pipe stdout of container \"c164c51aa489ee82d35a422204a5784d4965737ed3e8124031eab47ebfaf94a1\"" error="reading from a closed fifo" Oct 2 19:41:07.106227 env[1150]: time="2023-10-02T19:41:07.106175443Z" level=error msg="Failed to pipe stderr of container \"c164c51aa489ee82d35a422204a5784d4965737ed3e8124031eab47ebfaf94a1\"" error="reading from a closed fifo" Oct 2 19:41:07.106668 env[1150]: time="2023-10-02T19:41:07.106647118Z" level=error msg="StartContainer for \"c164c51aa489ee82d35a422204a5784d4965737ed3e8124031eab47ebfaf94a1\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:41:07.106772 kubelet[1562]: E1002 19:41:07.106757 1562 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c164c51aa489ee82d35a422204a5784d4965737ed3e8124031eab47ebfaf94a1" Oct 2 19:41:07.106834 kubelet[1562]: E1002 19:41:07.106822 1562 kuberuntime_manager.go:862] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:41:07.106834 kubelet[1562]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:41:07.106834 kubelet[1562]: rm /hostbin/cilium-mount Oct 2 19:41:07.106834 kubelet[1562]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-x4lwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:41:07.106940 kubelet[1562]: E1002 19:41:07.106846 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-cgbrl" podUID=cd7aec30-deb4-4fea-8195-316940ef2335 Oct 2 19:41:07.209046 kubelet[1562]: I1002 19:41:07.208293 1562 scope.go:115] "RemoveContainer" containerID="c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174" Oct 2 19:41:07.209046 kubelet[1562]: I1002 19:41:07.208549 1562 scope.go:115] "RemoveContainer" containerID="c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174" Oct 2 19:41:07.211001 env[1150]: time="2023-10-02T19:41:07.210890181Z" level=info msg="RemoveContainer for \"c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174\"" Oct 2 19:41:07.212059 env[1150]: time="2023-10-02T19:41:07.212005294Z" level=info msg="RemoveContainer for \"c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174\" returns successfully" Oct 2 19:41:07.212488 env[1150]: time="2023-10-02T19:41:07.212428263Z" level=error msg="ContainerStatus for \"c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174\": not found" Oct 2 19:41:07.212668 kubelet[1562]: E1002 19:41:07.212657 1562 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174\": not found" containerID="c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174" Oct 2 19:41:07.212770 kubelet[1562]: E1002 19:41:07.212761 1562 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": failed to get container status "c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174": rpc error: code = NotFound desc = an error occurred when try to find container "c205d99ad5c3e0740b0c529d0727d1c1cf41a89fc2aeb35016f22cc92d06d174": not found; Skipping pod "cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335)" Oct 2 19:41:07.213056 kubelet[1562]: E1002 19:41:07.213047 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335)\"" pod="kube-system/cilium-cgbrl" podUID=cd7aec30-deb4-4fea-8195-316940ef2335 Oct 2 19:41:07.995499 kubelet[1562]: E1002 19:41:07.995472 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:08.069291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c164c51aa489ee82d35a422204a5784d4965737ed3e8124031eab47ebfaf94a1-rootfs.mount: Deactivated successfully. Oct 2 19:41:08.996817 kubelet[1562]: E1002 19:41:08.996762 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:09.997802 kubelet[1562]: E1002 19:41:09.997769 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:10.205042 kubelet[1562]: W1002 19:41:10.204595 1562 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd7aec30_deb4_4fea_8195_316940ef2335.slice/cri-containerd-c164c51aa489ee82d35a422204a5784d4965737ed3e8124031eab47ebfaf94a1.scope WatchSource:0}: task c164c51aa489ee82d35a422204a5784d4965737ed3e8124031eab47ebfaf94a1 not found: not found Oct 2 19:41:10.998097 kubelet[1562]: E1002 19:41:10.998057 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:11.998419 kubelet[1562]: E1002 19:41:11.998386 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:12.002359 kubelet[1562]: E1002 19:41:12.002325 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:12.999342 kubelet[1562]: E1002 19:41:12.999307 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:13.999630 kubelet[1562]: E1002 19:41:13.999602 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:15.000797 kubelet[1562]: E1002 19:41:15.000765 1562 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:16.001647 kubelet[1562]: E1002 19:41:16.001615 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:17.002724 kubelet[1562]: E1002 19:41:17.002692 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:17.003038 kubelet[1562]: E1002 19:41:17.002981 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:18.002973 kubelet[1562]: E1002 19:41:18.002942 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:19.003691 kubelet[1562]: E1002 19:41:19.003665 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:19.062667 kubelet[1562]: E1002 19:41:19.062645 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335)\"" pod="kube-system/cilium-cgbrl" podUID=cd7aec30-deb4-4fea-8195-316940ef2335 Oct 2 19:41:20.004743 kubelet[1562]: E1002 19:41:20.004706 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:21.005753 kubelet[1562]: E1002 19:41:21.005718 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:22.003903 kubelet[1562]: E1002 19:41:22.003837 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:22.005989 kubelet[1562]: E1002 19:41:22.005971 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:23.006622 kubelet[1562]: E1002 19:41:23.006588 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:24.006989 kubelet[1562]: E1002 19:41:24.006964 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:25.007543 kubelet[1562]: E1002 19:41:25.007515 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:26.008434 kubelet[1562]: E1002 19:41:26.008404 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:26.913982 kubelet[1562]: E1002 19:41:26.913960 1562 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:27.004580 kubelet[1562]: E1002 19:41:27.004557 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:27.009681 kubelet[1562]: E1002 19:41:27.009664 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:41:28.010596 kubelet[1562]: E1002 19:41:28.010569 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:29.011462 kubelet[1562]: E1002 19:41:29.011434 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:30.012609 kubelet[1562]: E1002 19:41:30.012576 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:31.013510 kubelet[1562]: E1002 19:41:31.013477 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:32.005929 kubelet[1562]: E1002 19:41:32.005908 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:32.014123 kubelet[1562]: E1002 19:41:32.014104 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:32.062642 kubelet[1562]: E1002 19:41:32.062624 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335)\"" pod="kube-system/cilium-cgbrl" podUID=cd7aec30-deb4-4fea-8195-316940ef2335 Oct 2 19:41:33.014196 kubelet[1562]: E1002 19:41:33.014170 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:34.014763 kubelet[1562]: E1002 19:41:34.014729 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:35.015160 kubelet[1562]: E1002 19:41:35.015131 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:36.015807 kubelet[1562]: E1002 19:41:36.015779 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:37.007425 kubelet[1562]: E1002 19:41:37.007404 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:37.016818 kubelet[1562]: E1002 19:41:37.016772 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:38.016859 kubelet[1562]: E1002 19:41:38.016828 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:39.018157 kubelet[1562]: E1002 19:41:39.018128 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:40.018630 kubelet[1562]: E1002 19:41:40.018601 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:41.019829 kubelet[1562]: E1002 19:41:41.019799 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:42.008940 kubelet[1562]: E1002 19:41:42.008905 1562 kubelet.go:2373] "Container runtime network not 
ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:42.020486 kubelet[1562]: E1002 19:41:42.020428 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:43.021063 kubelet[1562]: E1002 19:41:43.021029 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:44.021164 kubelet[1562]: E1002 19:41:44.021136 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:45.022035 kubelet[1562]: E1002 19:41:45.021993 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:46.022675 kubelet[1562]: E1002 19:41:46.022649 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:46.062917 kubelet[1562]: E1002 19:41:46.062898 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335)\"" pod="kube-system/cilium-cgbrl" podUID=cd7aec30-deb4-4fea-8195-316940ef2335 Oct 2 19:41:46.914145 kubelet[1562]: E1002 19:41:46.914121 1562 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:47.010084 kubelet[1562]: E1002 19:41:47.010065 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:47.023487 kubelet[1562]: E1002 19:41:47.023455 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:48.023850 kubelet[1562]: E1002 19:41:48.023826 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:49.024230 kubelet[1562]: E1002 19:41:49.024207 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:50.025470 kubelet[1562]: E1002 19:41:50.025446 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:51.026762 kubelet[1562]: E1002 19:41:51.026736 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:52.010624 kubelet[1562]: E1002 19:41:52.010597 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:52.027803 kubelet[1562]: E1002 19:41:52.027774 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:53.029218 kubelet[1562]: E1002 19:41:53.029195 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:54.029702 kubelet[1562]: E1002 19:41:54.029671 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:41:55.030527 kubelet[1562]: E1002 19:41:55.030500 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:56.031620 kubelet[1562]: E1002 19:41:56.031592 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:57.011797 kubelet[1562]: E1002 19:41:57.011769 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:57.032097 kubelet[1562]: E1002 19:41:57.032077 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:57.062778 kubelet[1562]: E1002 19:41:57.062756 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335)\"" pod="kube-system/cilium-cgbrl" podUID=cd7aec30-deb4-4fea-8195-316940ef2335 Oct 2 19:41:58.032748 kubelet[1562]: E1002 19:41:58.032704 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:59.033549 kubelet[1562]: E1002 19:41:59.033518 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:00.034309 kubelet[1562]: E1002 19:42:00.034250 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:01.035159 kubelet[1562]: E1002 19:42:01.035115 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:02.013227 kubelet[1562]: E1002 19:42:02.013198 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:02.035811 kubelet[1562]: E1002 19:42:02.035760 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:03.036289 kubelet[1562]: E1002 19:42:03.036249 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:04.036854 kubelet[1562]: E1002 19:42:04.036809 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:05.037500 kubelet[1562]: E1002 19:42:05.037461 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:06.037677 kubelet[1562]: E1002 19:42:06.037638 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:06.914245 kubelet[1562]: E1002 19:42:06.914219 1562 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:07.013528 kubelet[1562]: E1002 19:42:07.013504 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:07.038736 kubelet[1562]: E1002 19:42:07.038703 1562 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:08.039406 kubelet[1562]: E1002 19:42:08.039379 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:09.040292 kubelet[1562]: E1002 19:42:09.040237 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:10.041296 kubelet[1562]: E1002 19:42:10.041265 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:10.063305 kubelet[1562]: E1002 19:42:10.063284 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-cgbrl_kube-system(cd7aec30-deb4-4fea-8195-316940ef2335)\"" pod="kube-system/cilium-cgbrl" podUID=cd7aec30-deb4-4fea-8195-316940ef2335 Oct 2 19:42:11.042505 kubelet[1562]: E1002 19:42:11.042445 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:12.014264 kubelet[1562]: E1002 19:42:12.014234 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:12.043486 kubelet[1562]: E1002 19:42:12.043458 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:13.044299 kubelet[1562]: E1002 19:42:13.044268 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:14.045574 kubelet[1562]: E1002 19:42:14.045548 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:15.046846 kubelet[1562]: E1002 19:42:15.046815 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:16.048281 kubelet[1562]: E1002 19:42:16.048246 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:17.015215 kubelet[1562]: E1002 19:42:17.015196 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:17.048612 kubelet[1562]: E1002 19:42:17.048585 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:18.049722 kubelet[1562]: E1002 19:42:18.049693 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:19.050277 kubelet[1562]: E1002 19:42:19.050249 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:20.051212 kubelet[1562]: E1002 19:42:20.051176 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:21.051709 kubelet[1562]: E1002 19:42:21.051680 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:22.015986 kubelet[1562]: E1002 
19:42:22.015940 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:22.052640 kubelet[1562]: E1002 19:42:22.052603 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:23.053684 kubelet[1562]: E1002 19:42:23.053655 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:24.054042 kubelet[1562]: E1002 19:42:24.054004 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:24.801806 env[1150]: time="2023-10-02T19:42:24.801743103Z" level=info msg="StopPodSandbox for \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\"" Oct 2 19:42:24.801806 env[1150]: time="2023-10-02T19:42:24.801807397Z" level=info msg="Container to stop \"c164c51aa489ee82d35a422204a5784d4965737ed3e8124031eab47ebfaf94a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:42:24.802764 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462-shm.mount: Deactivated successfully. Oct 2 19:42:24.812611 kernel: kauditd_printk_skb: 279 callbacks suppressed Oct 2 19:42:24.812723 kernel: audit: type=1334 audit(1696275744.808:668): prog-id=71 op=UNLOAD Oct 2 19:42:24.808000 audit: BPF prog-id=71 op=UNLOAD Oct 2 19:42:24.809160 systemd[1]: cri-containerd-d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462.scope: Deactivated successfully. Oct 2 19:42:24.812000 audit: BPF prog-id=74 op=UNLOAD Oct 2 19:42:24.815026 kernel: audit: type=1334 audit(1696275744.812:669): prog-id=74 op=UNLOAD Oct 2 19:42:24.828249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462-rootfs.mount: Deactivated successfully. 
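The two kubelet messages repeating above are independent of the cilium crash loop: file_linux.go complains roughly once a second because the static-pod path it watches has never been created, and kubelet.go keeps the runtime network marked not ready because no CNI configuration exists yet (cilium only writes one once its agent is running). A minimal check-and-silence sketch, assuming the paths printed in the messages rather than anything read from this node's KubeletConfiguration:

    # Paths named in the recurring kubelet errors above (assumed defaults).
    ls -ld /etc/kubernetes/manifests   # staticPodPath the kubelet polls for static pods
    ls /etc/cni/net.d/                 # CNI configs; empty until a network plugin installs one
    # On nodes that run no static pods, creating the directory quiets the
    # "Unable to read config path" messages:
    mkdir -p /etc/kubernetes/manifests

The "cni plugin not initialized" condition clears on its own once a working cilium pod drops its CNI config into /etc/cni/net.d.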
Oct 2 19:42:24.851478 env[1150]: time="2023-10-02T19:42:24.851434618Z" level=info msg="shim disconnected" id=d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462 Oct 2 19:42:24.851478 env[1150]: time="2023-10-02T19:42:24.851470037Z" level=warning msg="cleaning up after shim disconnected" id=d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462 namespace=k8s.io Oct 2 19:42:24.851478 env[1150]: time="2023-10-02T19:42:24.851476970Z" level=info msg="cleaning up dead shim" Oct 2 19:42:24.857980 env[1150]: time="2023-10-02T19:42:24.857948240Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:42:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2117 runtime=io.containerd.runc.v2\n" Oct 2 19:42:24.858169 env[1150]: time="2023-10-02T19:42:24.858149948Z" level=info msg="TearDown network for sandbox \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\" successfully" Oct 2 19:42:24.858214 env[1150]: time="2023-10-02T19:42:24.858167422Z" level=info msg="StopPodSandbox for \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\" returns successfully" Oct 2 19:42:24.948000 kubelet[1562]: I1002 19:42:24.947172 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-etc-cni-netd\") pod \"cd7aec30-deb4-4fea-8195-316940ef2335\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " Oct 2 19:42:24.948000 kubelet[1562]: I1002 19:42:24.947200 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-cilium-run\") pod \"cd7aec30-deb4-4fea-8195-316940ef2335\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " Oct 2 19:42:24.948000 kubelet[1562]: I1002 19:42:24.947210 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-xtables-lock\") pod \"cd7aec30-deb4-4fea-8195-316940ef2335\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " Oct 2 19:42:24.948000 kubelet[1562]: I1002 19:42:24.947221 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-cni-path\") pod \"cd7aec30-deb4-4fea-8195-316940ef2335\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " Oct 2 19:42:24.948000 kubelet[1562]: I1002 19:42:24.947246 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cd7aec30-deb4-4fea-8195-316940ef2335-hubble-tls\") pod \"cd7aec30-deb4-4fea-8195-316940ef2335\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " Oct 2 19:42:24.948000 kubelet[1562]: I1002 19:42:24.947257 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-cilium-cgroup\") pod \"cd7aec30-deb4-4fea-8195-316940ef2335\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " Oct 2 19:42:24.948317 kubelet[1562]: I1002 19:42:24.947270 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cd7aec30-deb4-4fea-8195-316940ef2335-clustermesh-secrets\") pod \"cd7aec30-deb4-4fea-8195-316940ef2335\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " Oct 2 19:42:24.948317 kubelet[1562]: I1002 
19:42:24.947281 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-host-proc-sys-kernel\") pod \"cd7aec30-deb4-4fea-8195-316940ef2335\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " Oct 2 19:42:24.948317 kubelet[1562]: I1002 19:42:24.947290 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-bpf-maps\") pod \"cd7aec30-deb4-4fea-8195-316940ef2335\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " Oct 2 19:42:24.948317 kubelet[1562]: I1002 19:42:24.947301 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd7aec30-deb4-4fea-8195-316940ef2335-cilium-config-path\") pod \"cd7aec30-deb4-4fea-8195-316940ef2335\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " Oct 2 19:42:24.948317 kubelet[1562]: I1002 19:42:24.947322 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4lwq\" (UniqueName: \"kubernetes.io/projected/cd7aec30-deb4-4fea-8195-316940ef2335-kube-api-access-x4lwq\") pod \"cd7aec30-deb4-4fea-8195-316940ef2335\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " Oct 2 19:42:24.948317 kubelet[1562]: I1002 19:42:24.947334 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-hostproc\") pod \"cd7aec30-deb4-4fea-8195-316940ef2335\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " Oct 2 19:42:24.948523 kubelet[1562]: I1002 19:42:24.947344 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-host-proc-sys-net\") pod \"cd7aec30-deb4-4fea-8195-316940ef2335\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " Oct 2 19:42:24.948523 kubelet[1562]: I1002 19:42:24.947356 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-lib-modules\") pod \"cd7aec30-deb4-4fea-8195-316940ef2335\" (UID: \"cd7aec30-deb4-4fea-8195-316940ef2335\") " Oct 2 19:42:24.948523 kubelet[1562]: I1002 19:42:24.947374 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cd7aec30-deb4-4fea-8195-316940ef2335" (UID: "cd7aec30-deb4-4fea-8195-316940ef2335"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:24.948523 kubelet[1562]: I1002 19:42:24.947136 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cd7aec30-deb4-4fea-8195-316940ef2335" (UID: "cd7aec30-deb4-4fea-8195-316940ef2335"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:24.948523 kubelet[1562]: I1002 19:42:24.947416 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cd7aec30-deb4-4fea-8195-316940ef2335" (UID: "cd7aec30-deb4-4fea-8195-316940ef2335"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:24.948729 kubelet[1562]: I1002 19:42:24.947426 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cd7aec30-deb4-4fea-8195-316940ef2335" (UID: "cd7aec30-deb4-4fea-8195-316940ef2335"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:24.948729 kubelet[1562]: I1002 19:42:24.947433 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-cni-path" (OuterVolumeSpecName: "cni-path") pod "cd7aec30-deb4-4fea-8195-316940ef2335" (UID: "cd7aec30-deb4-4fea-8195-316940ef2335"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:24.948729 kubelet[1562]: I1002 19:42:24.947685 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cd7aec30-deb4-4fea-8195-316940ef2335" (UID: "cd7aec30-deb4-4fea-8195-316940ef2335"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:24.948729 kubelet[1562]: I1002 19:42:24.947718 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cd7aec30-deb4-4fea-8195-316940ef2335" (UID: "cd7aec30-deb4-4fea-8195-316940ef2335"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:24.948729 kubelet[1562]: W1002 19:42:24.948051 1562 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/cd7aec30-deb4-4fea-8195-316940ef2335/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:42:24.949547 kubelet[1562]: I1002 19:42:24.949042 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cd7aec30-deb4-4fea-8195-316940ef2335" (UID: "cd7aec30-deb4-4fea-8195-316940ef2335"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:24.949547 kubelet[1562]: I1002 19:42:24.949072 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-hostproc" (OuterVolumeSpecName: "hostproc") pod "cd7aec30-deb4-4fea-8195-316940ef2335" (UID: "cd7aec30-deb4-4fea-8195-316940ef2335"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:24.949648 kubelet[1562]: I1002 19:42:24.949560 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd7aec30-deb4-4fea-8195-316940ef2335-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cd7aec30-deb4-4fea-8195-316940ef2335" (UID: "cd7aec30-deb4-4fea-8195-316940ef2335"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:42:24.949648 kubelet[1562]: I1002 19:42:24.949588 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cd7aec30-deb4-4fea-8195-316940ef2335" (UID: "cd7aec30-deb4-4fea-8195-316940ef2335"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:24.951766 systemd[1]: var-lib-kubelet-pods-cd7aec30\x2ddeb4\x2d4fea\x2d8195\x2d316940ef2335-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:42:24.952740 systemd[1]: var-lib-kubelet-pods-cd7aec30\x2ddeb4\x2d4fea\x2d8195\x2d316940ef2335-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:42:24.953592 kubelet[1562]: I1002 19:42:24.953565 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd7aec30-deb4-4fea-8195-316940ef2335-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cd7aec30-deb4-4fea-8195-316940ef2335" (UID: "cd7aec30-deb4-4fea-8195-316940ef2335"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:42:24.953650 kubelet[1562]: I1002 19:42:24.953625 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd7aec30-deb4-4fea-8195-316940ef2335-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cd7aec30-deb4-4fea-8195-316940ef2335" (UID: "cd7aec30-deb4-4fea-8195-316940ef2335"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:42:24.955539 systemd[1]: var-lib-kubelet-pods-cd7aec30\x2ddeb4\x2d4fea\x2d8195\x2d316940ef2335-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx4lwq.mount: Deactivated successfully. Oct 2 19:42:24.956134 kubelet[1562]: I1002 19:42:24.956107 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd7aec30-deb4-4fea-8195-316940ef2335-kube-api-access-x4lwq" (OuterVolumeSpecName: "kube-api-access-x4lwq") pod "cd7aec30-deb4-4fea-8195-316940ef2335" (UID: "cd7aec30-deb4-4fea-8195-316940ef2335"). InnerVolumeSpecName "kube-api-access-x4lwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:42:25.048487 kubelet[1562]: I1002 19:42:25.048457 1562 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-bpf-maps\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:25.048487 kubelet[1562]: I1002 19:42:25.048485 1562 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-host-proc-sys-kernel\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:25.048617 kubelet[1562]: I1002 19:42:25.048497 1562 reconciler.go:399] "Volume detached for volume \"kube-api-access-x4lwq\" (UniqueName: \"kubernetes.io/projected/cd7aec30-deb4-4fea-8195-316940ef2335-kube-api-access-x4lwq\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:25.048617 kubelet[1562]: I1002 19:42:25.048507 1562 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-hostproc\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:25.048617 kubelet[1562]: I1002 19:42:25.048518 1562 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-host-proc-sys-net\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:25.048617 kubelet[1562]: I1002 19:42:25.048528 1562 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-lib-modules\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:25.048617 kubelet[1562]: I1002 19:42:25.048538 1562 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd7aec30-deb4-4fea-8195-316940ef2335-cilium-config-path\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:25.048617 kubelet[1562]: I1002 19:42:25.048547 1562 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-xtables-lock\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:25.048617 kubelet[1562]: I1002 19:42:25.048557 1562 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-cni-path\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:25.048617 kubelet[1562]: I1002 19:42:25.048566 1562 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cd7aec30-deb4-4fea-8195-316940ef2335-hubble-tls\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:25.048767 kubelet[1562]: I1002 19:42:25.048575 1562 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-etc-cni-netd\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:25.048767 kubelet[1562]: I1002 19:42:25.048585 1562 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-cilium-run\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:25.048767 kubelet[1562]: I1002 19:42:25.048595 1562 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cd7aec30-deb4-4fea-8195-316940ef2335-clustermesh-secrets\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:25.048767 kubelet[1562]: I1002 
19:42:25.048604 1562 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cd7aec30-deb4-4fea-8195-316940ef2335-cilium-cgroup\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:25.056211 kubelet[1562]: E1002 19:42:25.054575 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:25.066023 systemd[1]: Removed slice kubepods-burstable-podcd7aec30_deb4_4fea_8195_316940ef2335.slice. Oct 2 19:42:25.299769 kubelet[1562]: I1002 19:42:25.299750 1562 scope.go:115] "RemoveContainer" containerID="c164c51aa489ee82d35a422204a5784d4965737ed3e8124031eab47ebfaf94a1" Oct 2 19:42:25.300799 env[1150]: time="2023-10-02T19:42:25.300772149Z" level=info msg="RemoveContainer for \"c164c51aa489ee82d35a422204a5784d4965737ed3e8124031eab47ebfaf94a1\"" Oct 2 19:42:25.305713 env[1150]: time="2023-10-02T19:42:25.305681191Z" level=info msg="RemoveContainer for \"c164c51aa489ee82d35a422204a5784d4965737ed3e8124031eab47ebfaf94a1\" returns successfully" Oct 2 19:42:25.328061 kubelet[1562]: I1002 19:42:25.327715 1562 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:42:25.328280 kubelet[1562]: E1002 19:42:25.328269 1562 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="cd7aec30-deb4-4fea-8195-316940ef2335" containerName="mount-cgroup" Oct 2 19:42:25.328337 kubelet[1562]: E1002 19:42:25.328329 1562 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="cd7aec30-deb4-4fea-8195-316940ef2335" containerName="mount-cgroup" Oct 2 19:42:25.328391 kubelet[1562]: E1002 19:42:25.328383 1562 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="cd7aec30-deb4-4fea-8195-316940ef2335" containerName="mount-cgroup" Oct 2 19:42:25.328448 kubelet[1562]: E1002 19:42:25.328440 1562 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="cd7aec30-deb4-4fea-8195-316940ef2335" containerName="mount-cgroup" Oct 2 19:42:25.328514 kubelet[1562]: I1002 19:42:25.328505 1562 memory_manager.go:345] "RemoveStaleState removing state" podUID="cd7aec30-deb4-4fea-8195-316940ef2335" containerName="mount-cgroup" Oct 2 19:42:25.328563 kubelet[1562]: I1002 19:42:25.328555 1562 memory_manager.go:345] "RemoveStaleState removing state" podUID="cd7aec30-deb4-4fea-8195-316940ef2335" containerName="mount-cgroup" Oct 2 19:42:25.328613 kubelet[1562]: I1002 19:42:25.328606 1562 memory_manager.go:345] "RemoveStaleState removing state" podUID="cd7aec30-deb4-4fea-8195-316940ef2335" containerName="mount-cgroup" Oct 2 19:42:25.328669 kubelet[1562]: I1002 19:42:25.328662 1562 memory_manager.go:345] "RemoveStaleState removing state" podUID="cd7aec30-deb4-4fea-8195-316940ef2335" containerName="mount-cgroup" Oct 2 19:42:25.328727 kubelet[1562]: E1002 19:42:25.328720 1562 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="cd7aec30-deb4-4fea-8195-316940ef2335" containerName="mount-cgroup" Oct 2 19:42:25.328806 kubelet[1562]: I1002 19:42:25.328798 1562 memory_manager.go:345] "RemoveStaleState removing state" podUID="cd7aec30-deb4-4fea-8195-316940ef2335" containerName="mount-cgroup" Oct 2 19:42:25.332740 systemd[1]: Created slice kubepods-burstable-pod730fa6c2_59ce_405f_81b2_b6e0e6f2e259.slice. 
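With the failed cilium-cgbrl pod fully torn down, the kubelet now admits its replacement, cilium-z57hl; the records further below show the replacement's mount-cgroup init container failing at create time as well, so the crash loop continues under a new pod UID. The "back-off 1m20s" entries earlier follow the kubelet's restart back-off, which by the commonly documented defaults starts at 10s, doubles on each failure, and caps at 5m. A sketch of that progression and of inspecting the failing init container with crictl (assuming crictl is installed; the container ID is a placeholder):

    # Assumed default kubelet crash-loop back-off, not read from this node:
    #   10s -> 20s -> 40s -> 1m20s -> 2m40s -> 5m -> 5m ...
    # "back-off 1m20s" is therefore the fourth step (10s doubled three times).
    crictl ps -a --name mount-cgroup     # list attempts of the failing init container
    crictl inspect <CONTAINER_ID>        # placeholder ID; status carries the OCI create error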
Oct 2 19:42:25.349904 kubelet[1562]: I1002 19:42:25.349875 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-cni-path\") pod \"cilium-z57hl\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " pod="kube-system/cilium-z57hl" Oct 2 19:42:25.350139 kubelet[1562]: I1002 19:42:25.350120 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-etc-cni-netd\") pod \"cilium-z57hl\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " pod="kube-system/cilium-z57hl" Oct 2 19:42:25.350219 kubelet[1562]: I1002 19:42:25.350143 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-clustermesh-secrets\") pod \"cilium-z57hl\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " pod="kube-system/cilium-z57hl" Oct 2 19:42:25.350219 kubelet[1562]: I1002 19:42:25.350159 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-host-proc-sys-net\") pod \"cilium-z57hl\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " pod="kube-system/cilium-z57hl" Oct 2 19:42:25.350219 kubelet[1562]: I1002 19:42:25.350173 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-cilium-run\") pod \"cilium-z57hl\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " pod="kube-system/cilium-z57hl" Oct 2 19:42:25.350219 kubelet[1562]: I1002 19:42:25.350184 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-host-proc-sys-kernel\") pod \"cilium-z57hl\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " pod="kube-system/cilium-z57hl" Oct 2 19:42:25.350219 kubelet[1562]: I1002 19:42:25.350200 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t25nd\" (UniqueName: \"kubernetes.io/projected/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-kube-api-access-t25nd\") pod \"cilium-z57hl\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " pod="kube-system/cilium-z57hl" Oct 2 19:42:25.350219 kubelet[1562]: I1002 19:42:25.350212 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-lib-modules\") pod \"cilium-z57hl\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " pod="kube-system/cilium-z57hl" Oct 2 19:42:25.350414 kubelet[1562]: I1002 19:42:25.350222 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-xtables-lock\") pod \"cilium-z57hl\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " pod="kube-system/cilium-z57hl" Oct 2 19:42:25.350414 kubelet[1562]: I1002 19:42:25.350233 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-hubble-tls\") pod \"cilium-z57hl\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " pod="kube-system/cilium-z57hl" Oct 2 19:42:25.350414 kubelet[1562]: I1002 19:42:25.350249 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-hostproc\") pod \"cilium-z57hl\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " pod="kube-system/cilium-z57hl" Oct 2 19:42:25.350414 kubelet[1562]: I1002 19:42:25.350260 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-cilium-cgroup\") pod \"cilium-z57hl\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " pod="kube-system/cilium-z57hl" Oct 2 19:42:25.350414 kubelet[1562]: I1002 19:42:25.350272 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-cilium-config-path\") pod \"cilium-z57hl\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " pod="kube-system/cilium-z57hl" Oct 2 19:42:25.350414 kubelet[1562]: I1002 19:42:25.350283 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-bpf-maps\") pod \"cilium-z57hl\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " pod="kube-system/cilium-z57hl" Oct 2 19:42:25.639634 env[1150]: time="2023-10-02T19:42:25.639518769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z57hl,Uid:730fa6c2-59ce-405f-81b2-b6e0e6f2e259,Namespace:kube-system,Attempt:0,}" Oct 2 19:42:25.645901 env[1150]: time="2023-10-02T19:42:25.645851719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:42:25.645901 env[1150]: time="2023-10-02T19:42:25.645899557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:42:25.646040 env[1150]: time="2023-10-02T19:42:25.645916638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:42:25.646302 env[1150]: time="2023-10-02T19:42:25.646258595Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d8c82d27d3654c70bb7879cb278b20136ac819142f1836b3e68f2692fd77074 pid=2146 runtime=io.containerd.runc.v2 Oct 2 19:42:25.655203 systemd[1]: Started cri-containerd-7d8c82d27d3654c70bb7879cb278b20136ac819142f1836b3e68f2692fd77074.scope. 
Oct 2 19:42:25.669000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.669000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.675454 kernel: audit: type=1400 audit(1696275745.669:670): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.675501 kernel: audit: type=1400 audit(1696275745.669:671): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.675537 kernel: audit: type=1400 audit(1696275745.669:672): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.669000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.677783 kernel: audit: type=1400 audit(1696275745.669:673): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.669000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.682561 kernel: audit: type=1400 audit(1696275745.669:674): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.682601 kernel: audit: type=1400 audit(1696275745.669:675): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.669000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.669000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.669000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.687344 kernel: audit: type=1400 audit(1696275745.669:676): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.687390 kernel: audit: type=1400 audit(1696275745.669:677): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.669000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.669000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.672000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.672000 audit: BPF prog-id=78 op=LOAD Oct 2 19:42:25.672000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.672000 audit[2156]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000147c48 a2=10 a3=1c items=0 ppid=2146 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:25.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764386338326432376433363534633730626237383739636232373862 Oct 2 19:42:25.672000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.672000 audit[2156]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001476b0 a2=3c a3=c items=0 ppid=2146 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:25.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764386338326432376433363534633730626237383739636232373862 Oct 2 19:42:25.672000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.672000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.672000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.672000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.672000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.672000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.672000 audit[2156]: AVC avc: 
denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.672000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.672000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.672000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.672000 audit: BPF prog-id=79 op=LOAD Oct 2 19:42:25.672000 audit[2156]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001479d8 a2=78 a3=c00039d7e0 items=0 ppid=2146 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:25.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764386338326432376433363534633730626237383739636232373862 Oct 2 19:42:25.677000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.677000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.677000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.677000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.677000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.677000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.677000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.677000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.677000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.677000 audit: BPF prog-id=80 op=LOAD Oct 2 19:42:25.677000 audit[2156]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 
a1=c000147770 a2=78 a3=c00039d828 items=0 ppid=2146 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:25.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764386338326432376433363534633730626237383739636232373862 Oct 2 19:42:25.684000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:42:25.684000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:42:25.684000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.684000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.684000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.684000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.684000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.684000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.684000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.684000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.684000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.684000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:25.684000 audit: BPF prog-id=81 op=LOAD Oct 2 19:42:25.684000 audit[2156]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000147c30 a2=78 a3=c00039dc38 items=0 ppid=2146 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:25.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764386338326432376433363534633730626237383739636232373862 Oct 2 19:42:25.703833 env[1150]: time="2023-10-02T19:42:25.703807605Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z57hl,Uid:730fa6c2-59ce-405f-81b2-b6e0e6f2e259,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d8c82d27d3654c70bb7879cb278b20136ac819142f1836b3e68f2692fd77074\"" Oct 2 19:42:25.705587 env[1150]: time="2023-10-02T19:42:25.705564858Z" level=info msg="CreateContainer within sandbox \"7d8c82d27d3654c70bb7879cb278b20136ac819142f1836b3e68f2692fd77074\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:42:25.740314 env[1150]: time="2023-10-02T19:42:25.740268609Z" level=info msg="CreateContainer within sandbox \"7d8c82d27d3654c70bb7879cb278b20136ac819142f1836b3e68f2692fd77074\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"59e64a2a28413455f1f6eddaed933ee49d0f90994ed4e499d00cf36560f2eff5\"" Oct 2 19:42:25.740793 env[1150]: time="2023-10-02T19:42:25.740777803Z" level=info msg="StartContainer for \"59e64a2a28413455f1f6eddaed933ee49d0f90994ed4e499d00cf36560f2eff5\"" Oct 2 19:42:25.754434 systemd[1]: Started cri-containerd-59e64a2a28413455f1f6eddaed933ee49d0f90994ed4e499d00cf36560f2eff5.scope. Oct 2 19:42:25.766134 systemd[1]: cri-containerd-59e64a2a28413455f1f6eddaed933ee49d0f90994ed4e499d00cf36560f2eff5.scope: Deactivated successfully. Oct 2 19:42:25.774520 env[1150]: time="2023-10-02T19:42:25.774477405Z" level=info msg="shim disconnected" id=59e64a2a28413455f1f6eddaed933ee49d0f90994ed4e499d00cf36560f2eff5 Oct 2 19:42:25.774685 env[1150]: time="2023-10-02T19:42:25.774673750Z" level=warning msg="cleaning up after shim disconnected" id=59e64a2a28413455f1f6eddaed933ee49d0f90994ed4e499d00cf36560f2eff5 namespace=k8s.io Oct 2 19:42:25.774766 env[1150]: time="2023-10-02T19:42:25.774757599Z" level=info msg="cleaning up dead shim" Oct 2 19:42:25.779840 env[1150]: time="2023-10-02T19:42:25.779805570Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:42:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2205 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:42:25Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/59e64a2a28413455f1f6eddaed933ee49d0f90994ed4e499d00cf36560f2eff5/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:42:25.780114 env[1150]: time="2023-10-02T19:42:25.780080014Z" level=error msg="copy shim log" error="read /proc/self/fd/28: file already closed" Oct 2 19:42:25.781086 env[1150]: time="2023-10-02T19:42:25.780284523Z" level=error msg="Failed to pipe stderr of container \"59e64a2a28413455f1f6eddaed933ee49d0f90994ed4e499d00cf36560f2eff5\"" error="reading from a closed fifo" Oct 2 19:42:25.781137 env[1150]: time="2023-10-02T19:42:25.781051706Z" level=error msg="Failed to pipe stdout of container \"59e64a2a28413455f1f6eddaed933ee49d0f90994ed4e499d00cf36560f2eff5\"" error="reading from a closed fifo" Oct 2 19:42:25.781687 env[1150]: time="2023-10-02T19:42:25.781662158Z" level=error msg="StartContainer for \"59e64a2a28413455f1f6eddaed933ee49d0f90994ed4e499d00cf36560f2eff5\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:42:25.781884 kubelet[1562]: E1002 19:42:25.781865 1562 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI 
runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="59e64a2a28413455f1f6eddaed933ee49d0f90994ed4e499d00cf36560f2eff5" Oct 2 19:42:25.781947 kubelet[1562]: E1002 19:42:25.781937 1562 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:42:25.781947 kubelet[1562]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:42:25.781947 kubelet[1562]: rm /hostbin/cilium-mount Oct 2 19:42:25.781947 kubelet[1562]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-t25nd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-z57hl_kube-system(730fa6c2-59ce-405f-81b2-b6e0e6f2e259): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:42:25.782111 kubelet[1562]: E1002 19:42:25.781959 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-z57hl" podUID=730fa6c2-59ce-405f-81b2-b6e0e6f2e259 Oct 2 19:42:26.057105 kubelet[1562]: E1002 19:42:26.056973 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:26.302628 env[1150]: time="2023-10-02T19:42:26.302536603Z" level=info msg="StopPodSandbox for \"7d8c82d27d3654c70bb7879cb278b20136ac819142f1836b3e68f2692fd77074\"" Oct 2 19:42:26.302979 env[1150]: time="2023-10-02T19:42:26.302959886Z" level=info msg="Container to stop \"59e64a2a28413455f1f6eddaed933ee49d0f90994ed4e499d00cf36560f2eff5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:42:26.304111 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d8c82d27d3654c70bb7879cb278b20136ac819142f1836b3e68f2692fd77074-shm.mount: Deactivated successfully. Oct 2 19:42:26.309000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:42:26.310505 systemd[1]: cri-containerd-7d8c82d27d3654c70bb7879cb278b20136ac819142f1836b3e68f2692fd77074.scope: Deactivated successfully. Oct 2 19:42:26.313000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:42:26.325586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d8c82d27d3654c70bb7879cb278b20136ac819142f1836b3e68f2692fd77074-rootfs.mount: Deactivated successfully. Oct 2 19:42:26.338468 env[1150]: time="2023-10-02T19:42:26.338427093Z" level=info msg="shim disconnected" id=7d8c82d27d3654c70bb7879cb278b20136ac819142f1836b3e68f2692fd77074 Oct 2 19:42:26.338468 env[1150]: time="2023-10-02T19:42:26.338461403Z" level=warning msg="cleaning up after shim disconnected" id=7d8c82d27d3654c70bb7879cb278b20136ac819142f1836b3e68f2692fd77074 namespace=k8s.io Oct 2 19:42:26.338468 env[1150]: time="2023-10-02T19:42:26.338467845Z" level=info msg="cleaning up dead shim" Oct 2 19:42:26.343372 env[1150]: time="2023-10-02T19:42:26.343341649Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:42:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2237 runtime=io.containerd.runc.v2\n" Oct 2 19:42:26.343660 env[1150]: time="2023-10-02T19:42:26.343643836Z" level=info msg="TearDown network for sandbox \"7d8c82d27d3654c70bb7879cb278b20136ac819142f1836b3e68f2692fd77074\" successfully" Oct 2 19:42:26.343732 env[1150]: time="2023-10-02T19:42:26.343719839Z" level=info msg="StopPodSandbox for \"7d8c82d27d3654c70bb7879cb278b20136ac819142f1836b3e68f2692fd77074\" returns successfully" Oct 2 19:42:26.358529 kubelet[1562]: I1002 19:42:26.357564 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-xtables-lock\") pod \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " Oct 2 19:42:26.358529 kubelet[1562]: I1002 19:42:26.357615 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "730fa6c2-59ce-405f-81b2-b6e0e6f2e259" (UID: "730fa6c2-59ce-405f-81b2-b6e0e6f2e259"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:26.358529 kubelet[1562]: I1002 19:42:26.357665 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-clustermesh-secrets\") pod \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " Oct 2 19:42:26.358529 kubelet[1562]: I1002 19:42:26.357681 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-lib-modules\") pod \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " Oct 2 19:42:26.358529 kubelet[1562]: I1002 19:42:26.357697 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-hostproc\") pod \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " Oct 2 19:42:26.358529 kubelet[1562]: I1002 19:42:26.357708 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-cni-path\") pod \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " Oct 2 19:42:26.358770 kubelet[1562]: I1002 19:42:26.357718 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-bpf-maps\") pod \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " Oct 2 19:42:26.358770 kubelet[1562]: I1002 19:42:26.357739 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-hubble-tls\") pod \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " Oct 2 19:42:26.358770 kubelet[1562]: I1002 19:42:26.357751 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-cilium-run\") pod \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " Oct 2 19:42:26.358770 kubelet[1562]: I1002 19:42:26.357763 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-host-proc-sys-kernel\") pod \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " Oct 2 19:42:26.358770 kubelet[1562]: I1002 19:42:26.357775 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t25nd\" (UniqueName: \"kubernetes.io/projected/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-kube-api-access-t25nd\") pod \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " Oct 2 19:42:26.358770 kubelet[1562]: I1002 19:42:26.357785 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-cilium-cgroup\") pod \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " Oct 2 19:42:26.358927 kubelet[1562]: I1002 19:42:26.357800 1562 reconciler.go:211] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-cilium-config-path\") pod \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " Oct 2 19:42:26.358927 kubelet[1562]: I1002 19:42:26.357823 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-etc-cni-netd\") pod \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " Oct 2 19:42:26.358927 kubelet[1562]: I1002 19:42:26.357834 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-host-proc-sys-net\") pod \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\" (UID: \"730fa6c2-59ce-405f-81b2-b6e0e6f2e259\") " Oct 2 19:42:26.358927 kubelet[1562]: I1002 19:42:26.357852 1562 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-xtables-lock\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:26.358927 kubelet[1562]: I1002 19:42:26.357865 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "730fa6c2-59ce-405f-81b2-b6e0e6f2e259" (UID: "730fa6c2-59ce-405f-81b2-b6e0e6f2e259"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:26.358927 kubelet[1562]: I1002 19:42:26.357878 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "730fa6c2-59ce-405f-81b2-b6e0e6f2e259" (UID: "730fa6c2-59ce-405f-81b2-b6e0e6f2e259"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:26.359439 kubelet[1562]: I1002 19:42:26.357904 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-hostproc" (OuterVolumeSpecName: "hostproc") pod "730fa6c2-59ce-405f-81b2-b6e0e6f2e259" (UID: "730fa6c2-59ce-405f-81b2-b6e0e6f2e259"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:26.359439 kubelet[1562]: I1002 19:42:26.357914 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-cni-path" (OuterVolumeSpecName: "cni-path") pod "730fa6c2-59ce-405f-81b2-b6e0e6f2e259" (UID: "730fa6c2-59ce-405f-81b2-b6e0e6f2e259"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:26.359439 kubelet[1562]: I1002 19:42:26.357923 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "730fa6c2-59ce-405f-81b2-b6e0e6f2e259" (UID: "730fa6c2-59ce-405f-81b2-b6e0e6f2e259"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:26.359439 kubelet[1562]: I1002 19:42:26.358278 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "730fa6c2-59ce-405f-81b2-b6e0e6f2e259" (UID: "730fa6c2-59ce-405f-81b2-b6e0e6f2e259"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:26.359439 kubelet[1562]: W1002 19:42:26.358355 1562 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/730fa6c2-59ce-405f-81b2-b6e0e6f2e259/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:42:26.359543 kubelet[1562]: I1002 19:42:26.359488 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "730fa6c2-59ce-405f-81b2-b6e0e6f2e259" (UID: "730fa6c2-59ce-405f-81b2-b6e0e6f2e259"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:42:26.359543 kubelet[1562]: I1002 19:42:26.359515 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "730fa6c2-59ce-405f-81b2-b6e0e6f2e259" (UID: "730fa6c2-59ce-405f-81b2-b6e0e6f2e259"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:26.359543 kubelet[1562]: I1002 19:42:26.359531 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "730fa6c2-59ce-405f-81b2-b6e0e6f2e259" (UID: "730fa6c2-59ce-405f-81b2-b6e0e6f2e259"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:26.359543 kubelet[1562]: I1002 19:42:26.359541 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "730fa6c2-59ce-405f-81b2-b6e0e6f2e259" (UID: "730fa6c2-59ce-405f-81b2-b6e0e6f2e259"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:26.364259 systemd[1]: var-lib-kubelet-pods-730fa6c2\x2d59ce\x2d405f\x2d81b2\x2db6e0e6f2e259-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt25nd.mount: Deactivated successfully. Oct 2 19:42:26.364336 systemd[1]: var-lib-kubelet-pods-730fa6c2\x2d59ce\x2d405f\x2d81b2\x2db6e0e6f2e259-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:42:26.366097 kubelet[1562]: I1002 19:42:26.366079 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "730fa6c2-59ce-405f-81b2-b6e0e6f2e259" (UID: "730fa6c2-59ce-405f-81b2-b6e0e6f2e259"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:42:26.366218 kubelet[1562]: I1002 19:42:26.366188 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-kube-api-access-t25nd" (OuterVolumeSpecName: "kube-api-access-t25nd") pod "730fa6c2-59ce-405f-81b2-b6e0e6f2e259" (UID: "730fa6c2-59ce-405f-81b2-b6e0e6f2e259"). InnerVolumeSpecName "kube-api-access-t25nd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:42:26.366315 kubelet[1562]: I1002 19:42:26.366291 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "730fa6c2-59ce-405f-81b2-b6e0e6f2e259" (UID: "730fa6c2-59ce-405f-81b2-b6e0e6f2e259"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:42:26.458825 kubelet[1562]: I1002 19:42:26.458798 1562 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-etc-cni-netd\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:26.458980 kubelet[1562]: I1002 19:42:26.458972 1562 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-host-proc-sys-net\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:26.459058 kubelet[1562]: I1002 19:42:26.459051 1562 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-host-proc-sys-kernel\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:26.459111 kubelet[1562]: I1002 19:42:26.459104 1562 reconciler.go:399] "Volume detached for volume \"kube-api-access-t25nd\" (UniqueName: \"kubernetes.io/projected/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-kube-api-access-t25nd\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:26.459165 kubelet[1562]: I1002 19:42:26.459159 1562 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-cilium-cgroup\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:26.459215 kubelet[1562]: I1002 19:42:26.459209 1562 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-cilium-config-path\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:26.459264 kubelet[1562]: I1002 19:42:26.459258 1562 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-clustermesh-secrets\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:26.459315 kubelet[1562]: I1002 19:42:26.459309 1562 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-lib-modules\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:26.459376 kubelet[1562]: I1002 19:42:26.459369 1562 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-cni-path\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:26.459428 kubelet[1562]: I1002 19:42:26.459421 1562 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-bpf-maps\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:26.459477 kubelet[1562]: I1002 19:42:26.459470 1562 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-hubble-tls\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:26.459544 kubelet[1562]: I1002 19:42:26.459537 1562 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-hostproc\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:26.459598 kubelet[1562]: I1002 19:42:26.459591 1562 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/730fa6c2-59ce-405f-81b2-b6e0e6f2e259-cilium-run\") on node \"10.67.124.141\" DevicePath \"\"" Oct 2 19:42:26.802837 systemd[1]: var-lib-kubelet-pods-730fa6c2\x2d59ce\x2d405f\x2d81b2\x2db6e0e6f2e259-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:42:26.913574 kubelet[1562]: E1002 19:42:26.913548 1562 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:27.016540 kubelet[1562]: E1002 19:42:27.016513 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:27.058121 kubelet[1562]: E1002 19:42:27.057855 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:27.063642 kubelet[1562]: I1002 19:42:27.063631 1562 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=cd7aec30-deb4-4fea-8195-316940ef2335 path="/var/lib/kubelet/pods/cd7aec30-deb4-4fea-8195-316940ef2335/volumes" Oct 2 19:42:27.065653 systemd[1]: Removed slice kubepods-burstable-pod730fa6c2_59ce_405f_81b2_b6e0e6f2e259.slice. Oct 2 19:42:27.304901 kubelet[1562]: I1002 19:42:27.304865 1562 scope.go:115] "RemoveContainer" containerID="59e64a2a28413455f1f6eddaed933ee49d0f90994ed4e499d00cf36560f2eff5" Oct 2 19:42:27.307146 env[1150]: time="2023-10-02T19:42:27.307095986Z" level=info msg="RemoveContainer for \"59e64a2a28413455f1f6eddaed933ee49d0f90994ed4e499d00cf36560f2eff5\"" Oct 2 19:42:27.308753 env[1150]: time="2023-10-02T19:42:27.308334033Z" level=info msg="RemoveContainer for \"59e64a2a28413455f1f6eddaed933ee49d0f90994ed4e499d00cf36560f2eff5\" returns successfully" Oct 2 19:42:28.058211 kubelet[1562]: E1002 19:42:28.058182 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:28.846659 kubelet[1562]: I1002 19:42:28.846551 1562 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:42:28.846659 kubelet[1562]: E1002 19:42:28.846597 1562 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="730fa6c2-59ce-405f-81b2-b6e0e6f2e259" containerName="mount-cgroup" Oct 2 19:42:28.846659 kubelet[1562]: I1002 19:42:28.846617 1562 memory_manager.go:345] "RemoveStaleState removing state" podUID="730fa6c2-59ce-405f-81b2-b6e0e6f2e259" containerName="mount-cgroup" Oct 2 19:42:28.851787 systemd[1]: Created slice kubepods-burstable-pod86a61d5b_5b9e_4cb9_9bcc_8f9a53af5f70.slice. 
Oct 2 19:42:28.855962 kubelet[1562]: I1002 19:42:28.855931 1562 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:42:28.862103 systemd[1]: Created slice kubepods-besteffort-pod451fba2f_69f4_4b14_a66d_88bc5b3e2c72.slice. Oct 2 19:42:28.872495 kubelet[1562]: I1002 19:42:28.872465 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-clustermesh-secrets\") pod \"cilium-krrnm\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") " pod="kube-system/cilium-krrnm" Oct 2 19:42:28.872619 kubelet[1562]: I1002 19:42:28.872551 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzwnh\" (UniqueName: \"kubernetes.io/projected/451fba2f-69f4-4b14-a66d-88bc5b3e2c72-kube-api-access-gzwnh\") pod \"cilium-operator-69b677f97c-vb7kb\" (UID: \"451fba2f-69f4-4b14-a66d-88bc5b3e2c72\") " pod="kube-system/cilium-operator-69b677f97c-vb7kb" Oct 2 19:42:28.872619 kubelet[1562]: I1002 19:42:28.872615 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-cilium-cgroup\") pod \"cilium-krrnm\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") " pod="kube-system/cilium-krrnm" Oct 2 19:42:28.872694 kubelet[1562]: I1002 19:42:28.872639 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-host-proc-sys-net\") pod \"cilium-krrnm\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") " pod="kube-system/cilium-krrnm" Oct 2 19:42:28.872726 kubelet[1562]: I1002 19:42:28.872695 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-host-proc-sys-kernel\") pod \"cilium-krrnm\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") " pod="kube-system/cilium-krrnm" Oct 2 19:42:28.872726 kubelet[1562]: I1002 19:42:28.872716 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jgnd\" (UniqueName: \"kubernetes.io/projected/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-kube-api-access-2jgnd\") pod \"cilium-krrnm\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") " pod="kube-system/cilium-krrnm" Oct 2 19:42:28.872789 kubelet[1562]: I1002 19:42:28.872735 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-xtables-lock\") pod \"cilium-krrnm\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") " pod="kube-system/cilium-krrnm" Oct 2 19:42:28.872817 kubelet[1562]: I1002 19:42:28.872788 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-cilium-ipsec-secrets\") pod \"cilium-krrnm\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") " pod="kube-system/cilium-krrnm" Oct 2 19:42:28.872817 kubelet[1562]: I1002 19:42:28.872808 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-hostproc\") pod \"cilium-krrnm\" (UID: 
\"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") " pod="kube-system/cilium-krrnm" Oct 2 19:42:28.872888 kubelet[1562]: I1002 19:42:28.872874 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-etc-cni-netd\") pod \"cilium-krrnm\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") " pod="kube-system/cilium-krrnm" Oct 2 19:42:28.872934 kubelet[1562]: I1002 19:42:28.872921 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-lib-modules\") pod \"cilium-krrnm\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") " pod="kube-system/cilium-krrnm" Oct 2 19:42:28.872966 kubelet[1562]: I1002 19:42:28.872948 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-hubble-tls\") pod \"cilium-krrnm\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") " pod="kube-system/cilium-krrnm" Oct 2 19:42:28.873022 kubelet[1562]: I1002 19:42:28.873002 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/451fba2f-69f4-4b14-a66d-88bc5b3e2c72-cilium-config-path\") pod \"cilium-operator-69b677f97c-vb7kb\" (UID: \"451fba2f-69f4-4b14-a66d-88bc5b3e2c72\") " pod="kube-system/cilium-operator-69b677f97c-vb7kb" Oct 2 19:42:28.873068 kubelet[1562]: I1002 19:42:28.873044 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-cilium-run\") pod \"cilium-krrnm\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") " pod="kube-system/cilium-krrnm" Oct 2 19:42:28.873101 kubelet[1562]: I1002 19:42:28.873075 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-bpf-maps\") pod \"cilium-krrnm\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") " pod="kube-system/cilium-krrnm" Oct 2 19:42:28.873166 kubelet[1562]: I1002 19:42:28.873133 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-cni-path\") pod \"cilium-krrnm\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") " pod="kube-system/cilium-krrnm" Oct 2 19:42:28.873227 kubelet[1562]: I1002 19:42:28.873213 1562 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-cilium-config-path\") pod \"cilium-krrnm\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") " pod="kube-system/cilium-krrnm" Oct 2 19:42:28.877863 kubelet[1562]: W1002 19:42:28.877834 1562 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod730fa6c2_59ce_405f_81b2_b6e0e6f2e259.slice/cri-containerd-59e64a2a28413455f1f6eddaed933ee49d0f90994ed4e499d00cf36560f2eff5.scope WatchSource:0}: container "59e64a2a28413455f1f6eddaed933ee49d0f90994ed4e499d00cf36560f2eff5" in namespace "k8s.io": not found Oct 2 19:42:29.059330 kubelet[1562]: E1002 19:42:29.059307 1562 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:29.064002 kubelet[1562]: I1002 19:42:29.063987 1562 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=730fa6c2-59ce-405f-81b2-b6e0e6f2e259 path="/var/lib/kubelet/pods/730fa6c2-59ce-405f-81b2-b6e0e6f2e259/volumes" Oct 2 19:42:29.160148 env[1150]: time="2023-10-02T19:42:29.160117152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-krrnm,Uid:86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70,Namespace:kube-system,Attempt:0,}" Oct 2 19:42:29.164736 env[1150]: time="2023-10-02T19:42:29.164719706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-vb7kb,Uid:451fba2f-69f4-4b14-a66d-88bc5b3e2c72,Namespace:kube-system,Attempt:0,}" Oct 2 19:42:29.176001 env[1150]: time="2023-10-02T19:42:29.175961757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:42:29.176154 env[1150]: time="2023-10-02T19:42:29.176140054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:42:29.176224 env[1150]: time="2023-10-02T19:42:29.176198612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:42:29.176264 env[1150]: time="2023-10-02T19:42:29.176219127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:42:29.176264 env[1150]: time="2023-10-02T19:42:29.176226229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:42:29.176327 env[1150]: time="2023-10-02T19:42:29.176285150Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a99c3d57b0820e1a65542fe1a14eadaba0088a92b955aab491921dc35ee4d0a6 pid=2276 runtime=io.containerd.runc.v2 Oct 2 19:42:29.176400 env[1150]: time="2023-10-02T19:42:29.176380331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:42:29.176601 env[1150]: time="2023-10-02T19:42:29.176569293Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/128d870941d6ff14b6abdf8f6a519da40cd59de78e8acf326ed5b79d6f8b8759 pid=2265 runtime=io.containerd.runc.v2 Oct 2 19:42:29.185649 systemd[1]: Started cri-containerd-128d870941d6ff14b6abdf8f6a519da40cd59de78e8acf326ed5b79d6f8b8759.scope. Oct 2 19:42:29.190193 systemd[1]: Started cri-containerd-a99c3d57b0820e1a65542fe1a14eadaba0088a92b955aab491921dc35ee4d0a6.scope. 
Oct 2 19:42:29.201000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.201000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.201000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.201000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.201000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.201000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.201000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.201000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.201000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.201000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.201000 audit: BPF prog-id=82 op=LOAD Oct 2 19:42:29.202000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.202000 audit[2293]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=2265 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:29.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132386438373039343164366666313462366162646638663661353139 Oct 2 19:42:29.202000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.202000 audit[2293]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=2265 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:29.202000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132386438373039343164366666313462366162646638663661353139 Oct 2 19:42:29.202000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.202000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.202000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.202000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.202000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.202000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.202000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.202000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.202000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.202000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.202000 audit: BPF prog-id=83 op=LOAD Oct 2 19:42:29.202000 audit[2293]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c000024c60 items=0 ppid=2265 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:29.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132386438373039343164366666313462366162646638663661353139 Oct 2 19:42:29.203000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.203000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.203000 audit[2293]: AVC avc: 
denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.203000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.203000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.203000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.203000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.203000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.203000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.203000 audit: BPF prog-id=84 op=LOAD Oct 2 19:42:29.203000 audit[2293]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c000024ca8 items=0 ppid=2265 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:29.203000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132386438373039343164366666313462366162646638663661353139 Oct 2 19:42:29.203000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:42:29.203000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:42:29.203000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.203000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.203000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.203000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.203000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.203000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.203000 audit[2293]: AVC avc: 
denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.203000 audit[2293]: AVC avc: denied { perfmon } for pid=2293 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.203000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.203000 audit[2293]: AVC avc: denied { bpf } for pid=2293 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.203000 audit: BPF prog-id=85 op=LOAD Oct 2 19:42:29.203000 audit[2293]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c0000250b8 items=0 ppid=2265 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:29.203000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132386438373039343164366666313462366162646638663661353139 Oct 2 19:42:29.204000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.204000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.204000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.204000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.204000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.204000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.204000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.204000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.204000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.204000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:42:29.204000 audit: BPF prog-id=86 op=LOAD Oct 2 19:42:29.205000 audit[2294]: AVC avc: denied { bpf } for pid=2294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.205000 audit[2294]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=2276 pid=2294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:29.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139396333643537623038323065316136353534326665316131346561 Oct 2 19:42:29.205000 audit[2294]: AVC avc: denied { perfmon } for pid=2294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.205000 audit[2294]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=2276 pid=2294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:29.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139396333643537623038323065316136353534326665316131346561 Oct 2 19:42:29.205000 audit[2294]: AVC avc: denied { bpf } for pid=2294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.205000 audit[2294]: AVC avc: denied { bpf } for pid=2294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.205000 audit[2294]: AVC avc: denied { bpf } for pid=2294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.205000 audit[2294]: AVC avc: denied { perfmon } for pid=2294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.205000 audit[2294]: AVC avc: denied { perfmon } for pid=2294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.205000 audit[2294]: AVC avc: denied { perfmon } for pid=2294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.205000 audit[2294]: AVC avc: denied { perfmon } for pid=2294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.205000 audit[2294]: AVC avc: denied { perfmon } for pid=2294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.205000 audit[2294]: AVC avc: denied { bpf } for 
pid=2294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.205000 audit[2294]: AVC avc: denied { bpf } for pid=2294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.205000 audit: BPF prog-id=87 op=LOAD Oct 2 19:42:29.205000 audit[2294]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c000368880 items=0 ppid=2276 pid=2294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:29.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139396333643537623038323065316136353534326665316131346561 Oct 2 19:42:29.206000 audit[2294]: AVC avc: denied { bpf } for pid=2294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.206000 audit[2294]: AVC avc: denied { bpf } for pid=2294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.206000 audit[2294]: AVC avc: denied { perfmon } for pid=2294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.206000 audit[2294]: AVC avc: denied { perfmon } for pid=2294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.206000 audit[2294]: AVC avc: denied { perfmon } for pid=2294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.206000 audit[2294]: AVC avc: denied { perfmon } for pid=2294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.206000 audit[2294]: AVC avc: denied { perfmon } for pid=2294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.206000 audit[2294]: AVC avc: denied { bpf } for pid=2294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.206000 audit[2294]: AVC avc: denied { bpf } for pid=2294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.206000 audit: BPF prog-id=88 op=LOAD Oct 2 19:42:29.206000 audit[2294]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c0003688c8 items=0 ppid=2276 pid=2294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:29.206000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139396333643537623038323065316136353534326665316131346561 Oct 2 19:42:29.206000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:42:29.206000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:42:29.207000 audit[2294]: AVC avc: denied { bpf } for pid=2294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.207000 audit[2294]: AVC avc: denied { bpf } for pid=2294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.207000 audit[2294]: AVC avc: denied { bpf } for pid=2294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.207000 audit[2294]: AVC avc: denied { perfmon } for pid=2294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.207000 audit[2294]: AVC avc: denied { perfmon } for pid=2294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.207000 audit[2294]: AVC avc: denied { perfmon } for pid=2294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.207000 audit[2294]: AVC avc: denied { perfmon } for pid=2294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.207000 audit[2294]: AVC avc: denied { perfmon } for pid=2294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.207000 audit[2294]: AVC avc: denied { bpf } for pid=2294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.207000 audit[2294]: AVC avc: denied { bpf } for pid=2294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:29.207000 audit: BPF prog-id=89 op=LOAD Oct 2 19:42:29.207000 audit[2294]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c000368cd8 items=0 ppid=2276 pid=2294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:29.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139396333643537623038323065316136353534326665316131346561 Oct 2 19:42:29.224635 env[1150]: time="2023-10-02T19:42:29.224605018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-krrnm,Uid:86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70,Namespace:kube-system,Attempt:0,} returns sandbox id \"128d870941d6ff14b6abdf8f6a519da40cd59de78e8acf326ed5b79d6f8b8759\"" Oct 2 19:42:29.227470 env[1150]: 
time="2023-10-02T19:42:29.227448688Z" level=info msg="CreateContainer within sandbox \"128d870941d6ff14b6abdf8f6a519da40cd59de78e8acf326ed5b79d6f8b8759\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:42:29.233984 env[1150]: time="2023-10-02T19:42:29.233954845Z" level=info msg="CreateContainer within sandbox \"128d870941d6ff14b6abdf8f6a519da40cd59de78e8acf326ed5b79d6f8b8759\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843\"" Oct 2 19:42:29.234373 env[1150]: time="2023-10-02T19:42:29.234353996Z" level=info msg="StartContainer for \"82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843\"" Oct 2 19:42:29.239102 env[1150]: time="2023-10-02T19:42:29.239065263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-vb7kb,Uid:451fba2f-69f4-4b14-a66d-88bc5b3e2c72,Namespace:kube-system,Attempt:0,} returns sandbox id \"a99c3d57b0820e1a65542fe1a14eadaba0088a92b955aab491921dc35ee4d0a6\"" Oct 2 19:42:29.240186 env[1150]: time="2023-10-02T19:42:29.240162368Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\"" Oct 2 19:42:29.247941 systemd[1]: Started cri-containerd-82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843.scope. Oct 2 19:42:29.257591 systemd[1]: cri-containerd-82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843.scope: Deactivated successfully. Oct 2 19:42:29.257805 systemd[1]: Stopped cri-containerd-82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843.scope. Oct 2 19:42:29.269655 env[1150]: time="2023-10-02T19:42:29.269612432Z" level=info msg="shim disconnected" id=82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843 Oct 2 19:42:29.269825 env[1150]: time="2023-10-02T19:42:29.269809168Z" level=warning msg="cleaning up after shim disconnected" id=82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843 namespace=k8s.io Oct 2 19:42:29.269888 env[1150]: time="2023-10-02T19:42:29.269877801Z" level=info msg="cleaning up dead shim" Oct 2 19:42:29.274730 env[1150]: time="2023-10-02T19:42:29.274706609Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:42:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2370 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:42:29Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:42:29.274952 env[1150]: time="2023-10-02T19:42:29.274921737Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" Oct 2 19:42:29.275501 env[1150]: time="2023-10-02T19:42:29.275480735Z" level=error msg="Failed to pipe stdout of container \"82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843\"" error="reading from a closed fifo" Oct 2 19:42:29.275695 env[1150]: time="2023-10-02T19:42:29.275554003Z" level=error msg="Failed to pipe stderr of container \"82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843\"" error="reading from a closed fifo" Oct 2 19:42:29.278854 env[1150]: time="2023-10-02T19:42:29.278837105Z" level=error msg="StartContainer for \"82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843\" failed" error="failed to create containerd task: failed to create shim task: OCI 
runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:42:29.279062 kubelet[1562]: E1002 19:42:29.278999 1562 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843" Oct 2 19:42:29.279141 kubelet[1562]: E1002 19:42:29.279122 1562 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:42:29.279141 kubelet[1562]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:42:29.279141 kubelet[1562]: rm /hostbin/cilium-mount Oct 2 19:42:29.279141 kubelet[1562]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2jgnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-krrnm_kube-system(86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:42:29.279286 kubelet[1562]: E1002 19:42:29.279150 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-krrnm" podUID=86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70 Oct 2 19:42:29.310397 env[1150]: time="2023-10-02T19:42:29.310362889Z" level=info msg="CreateContainer within sandbox \"128d870941d6ff14b6abdf8f6a519da40cd59de78e8acf326ed5b79d6f8b8759\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:42:29.343977 env[1150]: time="2023-10-02T19:42:29.343933648Z" level=info msg="CreateContainer within sandbox \"128d870941d6ff14b6abdf8f6a519da40cd59de78e8acf326ed5b79d6f8b8759\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d\"" Oct 2 19:42:29.344761 env[1150]: time="2023-10-02T19:42:29.344741731Z" level=info msg="StartContainer for \"7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d\"" Oct 2 19:42:29.356608 systemd[1]: Started cri-containerd-7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d.scope. Oct 2 19:42:29.366877 systemd[1]: cri-containerd-7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d.scope: Deactivated successfully. Oct 2 19:42:29.367066 systemd[1]: Stopped cri-containerd-7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d.scope. Oct 2 19:42:29.371942 env[1150]: time="2023-10-02T19:42:29.371903363Z" level=info msg="shim disconnected" id=7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d Oct 2 19:42:29.371942 env[1150]: time="2023-10-02T19:42:29.371938015Z" level=warning msg="cleaning up after shim disconnected" id=7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d namespace=k8s.io Oct 2 19:42:29.371942 env[1150]: time="2023-10-02T19:42:29.371944723Z" level=info msg="cleaning up dead shim" Oct 2 19:42:29.377326 env[1150]: time="2023-10-02T19:42:29.377278454Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:42:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2407 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:42:29Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:42:29.377712 env[1150]: time="2023-10-02T19:42:29.377657798Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" Oct 2 19:42:29.378801 env[1150]: time="2023-10-02T19:42:29.378767443Z" level=error msg="Failed to pipe stderr of container \"7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d\"" error="reading from a closed fifo" Oct 2 19:42:29.378843 env[1150]: time="2023-10-02T19:42:29.378816151Z" level=error msg="Failed to pipe stdout of container \"7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d\"" error="reading from a closed fifo" Oct 2 19:42:29.379410 env[1150]: time="2023-10-02T19:42:29.379387637Z" level=error msg="StartContainer for \"7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:42:29.379508 kubelet[1562]: E1002 19:42:29.379492 1562 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d" Oct 2 19:42:29.379565 kubelet[1562]: E1002 19:42:29.379552 1562 
kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:42:29.379565 kubelet[1562]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:42:29.379565 kubelet[1562]: rm /hostbin/cilium-mount Oct 2 19:42:29.379565 kubelet[1562]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2jgnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-krrnm_kube-system(86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:42:29.379668 kubelet[1562]: E1002 19:42:29.379573 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-krrnm" podUID=86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70 Oct 2 19:42:30.059859 kubelet[1562]: E1002 19:42:30.059825 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:30.312648 kubelet[1562]: I1002 19:42:30.312585 1562 scope.go:115] "RemoveContainer" containerID="82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843" Oct 2 19:42:30.312959 kubelet[1562]: I1002 19:42:30.312947 1562 scope.go:115] "RemoveContainer" containerID="82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843" Oct 2 19:42:30.313546 env[1150]: time="2023-10-02T19:42:30.313526008Z" level=info msg="RemoveContainer for \"82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843\"" Oct 2 19:42:30.313927 env[1150]: time="2023-10-02T19:42:30.313787938Z" level=info msg="RemoveContainer for \"82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843\"" Oct 2 19:42:30.314067 env[1150]: time="2023-10-02T19:42:30.314048147Z" 
level=error msg="RemoveContainer for \"82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843\" failed" error="failed to set removing state for container \"82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843\": container is already in removing state" Oct 2 19:42:30.314237 kubelet[1562]: E1002 19:42:30.314220 1562 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843\": container is already in removing state" containerID="82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843" Oct 2 19:42:30.314275 kubelet[1562]: E1002 19:42:30.314239 1562 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843": container is already in removing state; Skipping pod "cilium-krrnm_kube-system(86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70)" Oct 2 19:42:30.314381 kubelet[1562]: E1002 19:42:30.314367 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-krrnm_kube-system(86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70)\"" pod="kube-system/cilium-krrnm" podUID=86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70 Oct 2 19:42:30.389231 env[1150]: time="2023-10-02T19:42:30.389207104Z" level=info msg="RemoveContainer for \"82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843\" returns successfully" Oct 2 19:42:30.391450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2905677618.mount: Deactivated successfully. Oct 2 19:42:30.936705 env[1150]: time="2023-10-02T19:42:30.936661277Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:42:30.937252 env[1150]: time="2023-10-02T19:42:30.937237012Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:42:30.938239 env[1150]: time="2023-10-02T19:42:30.938217889Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:42:30.938696 env[1150]: time="2023-10-02T19:42:30.938669248Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\" returns image reference \"sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e\"" Oct 2 19:42:30.939887 env[1150]: time="2023-10-02T19:42:30.939866858Z" level=info msg="CreateContainer within sandbox \"a99c3d57b0820e1a65542fe1a14eadaba0088a92b955aab491921dc35ee4d0a6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:42:30.946240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3836964842.mount: Deactivated successfully. 
Oct 2 19:42:30.948973 env[1150]: time="2023-10-02T19:42:30.948950462Z" level=info msg="CreateContainer within sandbox \"a99c3d57b0820e1a65542fe1a14eadaba0088a92b955aab491921dc35ee4d0a6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f\"" Oct 2 19:42:30.949584 env[1150]: time="2023-10-02T19:42:30.949559968Z" level=info msg="StartContainer for \"b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f\"" Oct 2 19:42:30.959195 systemd[1]: Started cri-containerd-b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f.scope. Oct 2 19:42:30.981677 kernel: kauditd_printk_skb: 165 callbacks suppressed Oct 2 19:42:30.981749 kernel: audit: type=1400 audit(1696275750.970:726): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.981769 kernel: audit: type=1400 audit(1696275750.970:727): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.970000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.970000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.970000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.985795 kernel: audit: type=1400 audit(1696275750.970:728): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.985831 kernel: audit: type=1400 audit(1696275750.970:729): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.970000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.970000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.990216 kernel: audit: type=1400 audit(1696275750.970:730): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.990249 kernel: audit: type=1400 audit(1696275750.970:731): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.970000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.970000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.994958 kernel: audit: type=1400 audit(1696275750.970:732): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.970000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.998027 kernel: audit: type=1400 audit(1696275750.970:733): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.970000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:31.001033 kernel: audit: type=1400 audit(1696275750.970:734): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.980000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.980000 audit: BPF prog-id=90 op=LOAD Oct 2 19:42:30.981000 audit[2428]: AVC avc: denied { bpf } for pid=2428 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.981000 audit[2428]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2276 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:30.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236663866633632633663666332663239626337316638306130646561 Oct 2 19:42:31.004054 kernel: audit: type=1400 audit(1696275750.980:735): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.982000 audit[2428]: AVC avc: denied { perfmon } for pid=2428 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.982000 audit[2428]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=2276 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:30.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236663866633632633663666332663239626337316638306130646561 Oct 2 19:42:30.982000 audit[2428]: AVC avc: denied { bpf } for pid=2428 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.982000 audit[2428]: AVC avc: denied { bpf } for pid=2428 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.982000 audit[2428]: AVC avc: denied { bpf } for pid=2428 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.982000 audit[2428]: AVC avc: denied { perfmon } for pid=2428 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.982000 audit[2428]: AVC avc: denied { perfmon } for pid=2428 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.982000 audit[2428]: AVC avc: denied { perfmon } for pid=2428 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.982000 audit[2428]: AVC avc: denied { perfmon } for pid=2428 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.982000 audit[2428]: AVC avc: denied { perfmon } for pid=2428 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.982000 audit[2428]: AVC avc: denied { bpf } for pid=2428 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.982000 audit[2428]: AVC avc: denied { bpf } for pid=2428 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.982000 audit: BPF prog-id=91 op=LOAD Oct 2 19:42:30.982000 audit[2428]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000303210 items=0 ppid=2276 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:30.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236663866633632633663666332663239626337316638306130646561 Oct 2 19:42:30.986000 audit[2428]: AVC avc: denied { bpf } for pid=2428 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.986000 audit[2428]: AVC avc: denied { bpf } for pid=2428 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.986000 audit[2428]: AVC avc: denied { perfmon } for pid=2428 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.986000 audit[2428]: AVC avc: denied { perfmon } for pid=2428 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:42:30.986000 audit[2428]: AVC avc: denied { perfmon } for pid=2428 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.986000 audit[2428]: AVC avc: denied { perfmon } for pid=2428 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.986000 audit[2428]: AVC avc: denied { perfmon } for pid=2428 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.986000 audit[2428]: AVC avc: denied { bpf } for pid=2428 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.986000 audit[2428]: AVC avc: denied { bpf } for pid=2428 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.986000 audit: BPF prog-id=92 op=LOAD Oct 2 19:42:30.986000 audit[2428]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000303258 items=0 ppid=2276 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:30.986000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236663866633632633663666332663239626337316638306130646561 Oct 2 19:42:30.986000 audit: BPF prog-id=92 op=UNLOAD Oct 2 19:42:30.986000 audit: BPF prog-id=91 op=UNLOAD Oct 2 19:42:30.986000 audit[2428]: AVC avc: denied { bpf } for pid=2428 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.986000 audit[2428]: AVC avc: denied { bpf } for pid=2428 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.986000 audit[2428]: AVC avc: denied { bpf } for pid=2428 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.986000 audit[2428]: AVC avc: denied { perfmon } for pid=2428 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.986000 audit[2428]: AVC avc: denied { perfmon } for pid=2428 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.986000 audit[2428]: AVC avc: denied { perfmon } for pid=2428 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.986000 audit[2428]: AVC avc: denied { perfmon } for pid=2428 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.986000 audit[2428]: AVC avc: denied { perfmon } for pid=2428 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:42:30.986000 audit[2428]: AVC avc: denied { bpf } for pid=2428 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.986000 audit[2428]: AVC avc: denied { bpf } for pid=2428 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:30.986000 audit: BPF prog-id=93 op=LOAD Oct 2 19:42:30.986000 audit[2428]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000303668 items=0 ppid=2276 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:30.986000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236663866633632633663666332663239626337316638306130646561 Oct 2 19:42:31.043000 audit[2439]: AVC avc: denied { map_create } for pid=2439 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c284,c477 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c284,c477 tclass=bpf permissive=0 Oct 2 19:42:31.043000 audit[2439]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c00051f7d0 a2=48 a3=c00051f7c0 items=0 ppid=2276 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c284,c477 key=(null) Oct 2 19:42:31.043000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:42:31.054659 env[1150]: time="2023-10-02T19:42:31.054632304Z" level=info msg="StartContainer for \"b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f\" returns successfully" Oct 2 19:42:31.060051 kubelet[1562]: E1002 19:42:31.060026 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:31.314834 kubelet[1562]: E1002 19:42:31.314758 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-krrnm_kube-system(86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70)\"" pod="kube-system/cilium-krrnm" podUID=86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70 Oct 2 19:42:32.017800 kubelet[1562]: E1002 19:42:32.017767 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:32.061157 kubelet[1562]: E1002 19:42:32.061116 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:32.375438 kubelet[1562]: W1002 19:42:32.375358 1562 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86a61d5b_5b9e_4cb9_9bcc_8f9a53af5f70.slice/cri-containerd-82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843.scope WatchSource:0}: container 
"82e7a0a9de11d6e15b53082e1296d40995ea9a194df9a2208e450d8717bf7843" in namespace "k8s.io": not found Oct 2 19:42:33.061436 kubelet[1562]: E1002 19:42:33.061403 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:34.062104 kubelet[1562]: E1002 19:42:34.062071 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:35.062675 kubelet[1562]: E1002 19:42:35.062649 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:35.483150 kubelet[1562]: W1002 19:42:35.483128 1562 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86a61d5b_5b9e_4cb9_9bcc_8f9a53af5f70.slice/cri-containerd-7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d.scope WatchSource:0}: task 7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d not found: not found Oct 2 19:42:36.063725 kubelet[1562]: E1002 19:42:36.063699 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:37.019047 kubelet[1562]: E1002 19:42:37.019027 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:37.065708 kubelet[1562]: E1002 19:42:37.065687 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:38.066297 kubelet[1562]: E1002 19:42:38.066269 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:39.067192 kubelet[1562]: E1002 19:42:39.067172 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:40.068240 kubelet[1562]: E1002 19:42:40.068207 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:41.068648 kubelet[1562]: E1002 19:42:41.068625 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:42.020357 kubelet[1562]: E1002 19:42:42.020338 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:42.069048 kubelet[1562]: E1002 19:42:42.069021 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:43.069927 kubelet[1562]: E1002 19:42:43.069907 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:44.070995 kubelet[1562]: E1002 19:42:44.070954 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:45.071634 kubelet[1562]: E1002 19:42:45.071584 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:46.064489 env[1150]: time="2023-10-02T19:42:46.064421405Z" level=info msg="CreateContainer within sandbox \"128d870941d6ff14b6abdf8f6a519da40cd59de78e8acf326ed5b79d6f8b8759\" for 
container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:42:46.071730 kubelet[1562]: E1002 19:42:46.071713 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:46.095347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount969165998.mount: Deactivated successfully. Oct 2 19:42:46.098944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount707682070.mount: Deactivated successfully. Oct 2 19:42:46.100528 env[1150]: time="2023-10-02T19:42:46.100503959Z" level=info msg="CreateContainer within sandbox \"128d870941d6ff14b6abdf8f6a519da40cd59de78e8acf326ed5b79d6f8b8759\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb\"" Oct 2 19:42:46.101038 env[1150]: time="2023-10-02T19:42:46.101022880Z" level=info msg="StartContainer for \"49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb\"" Oct 2 19:42:46.114406 systemd[1]: Started cri-containerd-49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb.scope. Oct 2 19:42:46.125243 systemd[1]: cri-containerd-49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb.scope: Deactivated successfully. Oct 2 19:42:46.125390 systemd[1]: Stopped cri-containerd-49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb.scope. Oct 2 19:42:46.129845 env[1150]: time="2023-10-02T19:42:46.129813137Z" level=info msg="shim disconnected" id=49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb Oct 2 19:42:46.129973 env[1150]: time="2023-10-02T19:42:46.129963056Z" level=warning msg="cleaning up after shim disconnected" id=49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb namespace=k8s.io Oct 2 19:42:46.130072 env[1150]: time="2023-10-02T19:42:46.130062743Z" level=info msg="cleaning up dead shim" Oct 2 19:42:46.134416 env[1150]: time="2023-10-02T19:42:46.134393727Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:42:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2482 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:42:46Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:42:46.134553 env[1150]: time="2023-10-02T19:42:46.134516208Z" level=error msg="copy shim log" error="read /proc/self/fd/55: file already closed" Oct 2 19:42:46.134651 env[1150]: time="2023-10-02T19:42:46.134631853Z" level=error msg="Failed to pipe stdout of container \"49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb\"" error="reading from a closed fifo" Oct 2 19:42:46.134726 env[1150]: time="2023-10-02T19:42:46.134707812Z" level=error msg="Failed to pipe stderr of container \"49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb\"" error="reading from a closed fifo" Oct 2 19:42:46.135239 env[1150]: time="2023-10-02T19:42:46.135221320Z" level=error msg="StartContainer for \"49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:42:46.135658 kubelet[1562]: E1002 19:42:46.135379 1562 remote_runtime.go:474] "StartContainer 
from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb" Oct 2 19:42:46.135658 kubelet[1562]: E1002 19:42:46.135439 1562 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:42:46.135658 kubelet[1562]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:42:46.135658 kubelet[1562]: rm /hostbin/cilium-mount Oct 2 19:42:46.135794 kubelet[1562]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2jgnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-krrnm_kube-system(86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:42:46.135852 kubelet[1562]: E1002 19:42:46.135464 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-krrnm" podUID=86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70 Oct 2 19:42:46.334050 kubelet[1562]: I1002 19:42:46.333220 1562 scope.go:115] "RemoveContainer" containerID="7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d" Oct 2 19:42:46.334050 kubelet[1562]: I1002 19:42:46.333454 1562 scope.go:115] "RemoveContainer" containerID="7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d" Oct 2 19:42:46.335143 env[1150]: time="2023-10-02T19:42:46.335062531Z" level=info msg="RemoveContainer for 
\"7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d\"" Oct 2 19:42:46.335726 env[1150]: time="2023-10-02T19:42:46.335703209Z" level=info msg="RemoveContainer for \"7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d\"" Oct 2 19:42:46.335788 env[1150]: time="2023-10-02T19:42:46.335759724Z" level=error msg="RemoveContainer for \"7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d\" failed" error="failed to set removing state for container \"7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d\": container is already in removing state" Oct 2 19:42:46.335927 kubelet[1562]: E1002 19:42:46.335911 1562 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d\": container is already in removing state" containerID="7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d" Oct 2 19:42:46.335977 kubelet[1562]: E1002 19:42:46.335936 1562 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d": container is already in removing state; Skipping pod "cilium-krrnm_kube-system(86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70)" Oct 2 19:42:46.336146 kubelet[1562]: E1002 19:42:46.336132 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-krrnm_kube-system(86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70)\"" pod="kube-system/cilium-krrnm" podUID=86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70 Oct 2 19:42:46.337192 env[1150]: time="2023-10-02T19:42:46.337168134Z" level=info msg="RemoveContainer for \"7dce4e1e92a4f24ad758b3b134892e16bcfd41c9b48558bc84873d110b99786d\" returns successfully" Oct 2 19:42:46.913871 kubelet[1562]: E1002 19:42:46.913849 1562 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:47.021627 kubelet[1562]: E1002 19:42:47.021610 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:47.072691 kubelet[1562]: E1002 19:42:47.072665 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:47.093496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb-rootfs.mount: Deactivated successfully. 
Oct 2 19:42:48.073912 kubelet[1562]: E1002 19:42:48.073848 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:49.074965 kubelet[1562]: E1002 19:42:49.074944 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:49.234630 kubelet[1562]: W1002 19:42:49.234591 1562 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86a61d5b_5b9e_4cb9_9bcc_8f9a53af5f70.slice/cri-containerd-49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb.scope WatchSource:0}: task 49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb not found: not found Oct 2 19:42:50.075783 kubelet[1562]: E1002 19:42:50.075756 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:51.076579 kubelet[1562]: E1002 19:42:51.076560 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:52.022596 kubelet[1562]: E1002 19:42:52.022433 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:52.078031 kubelet[1562]: E1002 19:42:52.078000 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:53.078304 kubelet[1562]: E1002 19:42:53.078266 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:54.078818 kubelet[1562]: E1002 19:42:54.078785 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:55.079770 kubelet[1562]: E1002 19:42:55.079748 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:56.081194 kubelet[1562]: E1002 19:42:56.081158 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:57.022849 kubelet[1562]: E1002 19:42:57.022818 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:57.081840 kubelet[1562]: E1002 19:42:57.081774 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:58.082005 kubelet[1562]: E1002 19:42:58.081976 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:59.083052 kubelet[1562]: E1002 19:42:59.083024 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:00.084148 kubelet[1562]: E1002 19:43:00.084116 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:01.084907 kubelet[1562]: E1002 19:43:01.084875 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:02.024311 kubelet[1562]: E1002 19:43:02.024281 1562 kubelet.go:2373] "Container runtime network not 
ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:02.062666 kubelet[1562]: E1002 19:43:02.062420 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-krrnm_kube-system(86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70)\"" pod="kube-system/cilium-krrnm" podUID=86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70 Oct 2 19:43:02.085699 kubelet[1562]: E1002 19:43:02.085669 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:03.085969 kubelet[1562]: E1002 19:43:03.085942 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:04.087073 kubelet[1562]: E1002 19:43:04.087044 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:05.088187 kubelet[1562]: E1002 19:43:05.088139 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:06.089250 kubelet[1562]: E1002 19:43:06.089217 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:06.913690 kubelet[1562]: E1002 19:43:06.913661 1562 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:06.944067 env[1150]: time="2023-10-02T19:43:06.944031382Z" level=info msg="StopPodSandbox for \"7d8c82d27d3654c70bb7879cb278b20136ac819142f1836b3e68f2692fd77074\"" Oct 2 19:43:06.944331 env[1150]: time="2023-10-02T19:43:06.944101140Z" level=info msg="TearDown network for sandbox \"7d8c82d27d3654c70bb7879cb278b20136ac819142f1836b3e68f2692fd77074\" successfully" Oct 2 19:43:06.944331 env[1150]: time="2023-10-02T19:43:06.944130111Z" level=info msg="StopPodSandbox for \"7d8c82d27d3654c70bb7879cb278b20136ac819142f1836b3e68f2692fd77074\" returns successfully" Oct 2 19:43:06.945291 env[1150]: time="2023-10-02T19:43:06.944499959Z" level=info msg="RemovePodSandbox for \"7d8c82d27d3654c70bb7879cb278b20136ac819142f1836b3e68f2692fd77074\"" Oct 2 19:43:06.945291 env[1150]: time="2023-10-02T19:43:06.944522132Z" level=info msg="Forcibly stopping sandbox \"7d8c82d27d3654c70bb7879cb278b20136ac819142f1836b3e68f2692fd77074\"" Oct 2 19:43:06.945291 env[1150]: time="2023-10-02T19:43:06.944570253Z" level=info msg="TearDown network for sandbox \"7d8c82d27d3654c70bb7879cb278b20136ac819142f1836b3e68f2692fd77074\" successfully" Oct 2 19:43:06.945819 env[1150]: time="2023-10-02T19:43:06.945801408Z" level=info msg="RemovePodSandbox \"7d8c82d27d3654c70bb7879cb278b20136ac819142f1836b3e68f2692fd77074\" returns successfully" Oct 2 19:43:06.946167 env[1150]: time="2023-10-02T19:43:06.946144294Z" level=info msg="StopPodSandbox for \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\"" Oct 2 19:43:06.946243 env[1150]: time="2023-10-02T19:43:06.946207546Z" level=info msg="TearDown network for sandbox \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\" successfully" Oct 2 19:43:06.946243 env[1150]: time="2023-10-02T19:43:06.946238674Z" level=info msg="StopPodSandbox for \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\" returns successfully" Oct 2 19:43:06.946639 env[1150]: 
time="2023-10-02T19:43:06.946618243Z" level=info msg="RemovePodSandbox for \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\"" Oct 2 19:43:06.946724 env[1150]: time="2023-10-02T19:43:06.946697766Z" level=info msg="Forcibly stopping sandbox \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\"" Oct 2 19:43:06.946814 env[1150]: time="2023-10-02T19:43:06.946799274Z" level=info msg="TearDown network for sandbox \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\" successfully" Oct 2 19:43:06.947884 env[1150]: time="2023-10-02T19:43:06.947866785Z" level=info msg="RemovePodSandbox \"d11a7a3df16c6e9c25a2fa476bd1fbdd3400315060098c77f92431c8eca2b462\" returns successfully" Oct 2 19:43:07.025002 kubelet[1562]: E1002 19:43:07.024986 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:07.089496 kubelet[1562]: E1002 19:43:07.089468 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:08.090155 kubelet[1562]: E1002 19:43:08.090126 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:09.090908 kubelet[1562]: E1002 19:43:09.090874 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:10.091163 kubelet[1562]: E1002 19:43:10.091134 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:11.092112 kubelet[1562]: E1002 19:43:11.092090 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:12.026095 kubelet[1562]: E1002 19:43:12.026059 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:12.093485 kubelet[1562]: E1002 19:43:12.093462 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:13.065304 env[1150]: time="2023-10-02T19:43:13.065147154Z" level=info msg="CreateContainer within sandbox \"128d870941d6ff14b6abdf8f6a519da40cd59de78e8acf326ed5b79d6f8b8759\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:43:13.094870 kubelet[1562]: E1002 19:43:13.094827 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:13.115748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount729856883.mount: Deactivated successfully. Oct 2 19:43:13.118061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4037655174.mount: Deactivated successfully. 
Oct 2 19:43:13.120202 env[1150]: time="2023-10-02T19:43:13.120176685Z" level=info msg="CreateContainer within sandbox \"128d870941d6ff14b6abdf8f6a519da40cd59de78e8acf326ed5b79d6f8b8759\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"40bd1fa1f58068f5b1a04665788383513a5d7c5da6528a97c0dc11ad6bf6caf4\"" Oct 2 19:43:13.120530 env[1150]: time="2023-10-02T19:43:13.120514523Z" level=info msg="StartContainer for \"40bd1fa1f58068f5b1a04665788383513a5d7c5da6528a97c0dc11ad6bf6caf4\"" Oct 2 19:43:13.132749 systemd[1]: Started cri-containerd-40bd1fa1f58068f5b1a04665788383513a5d7c5da6528a97c0dc11ad6bf6caf4.scope. Oct 2 19:43:13.142710 systemd[1]: cri-containerd-40bd1fa1f58068f5b1a04665788383513a5d7c5da6528a97c0dc11ad6bf6caf4.scope: Deactivated successfully. Oct 2 19:43:13.142877 systemd[1]: Stopped cri-containerd-40bd1fa1f58068f5b1a04665788383513a5d7c5da6528a97c0dc11ad6bf6caf4.scope. Oct 2 19:43:13.147870 env[1150]: time="2023-10-02T19:43:13.147827034Z" level=info msg="shim disconnected" id=40bd1fa1f58068f5b1a04665788383513a5d7c5da6528a97c0dc11ad6bf6caf4 Oct 2 19:43:13.148006 env[1150]: time="2023-10-02T19:43:13.147994740Z" level=warning msg="cleaning up after shim disconnected" id=40bd1fa1f58068f5b1a04665788383513a5d7c5da6528a97c0dc11ad6bf6caf4 namespace=k8s.io Oct 2 19:43:13.148091 env[1150]: time="2023-10-02T19:43:13.148081693Z" level=info msg="cleaning up dead shim" Oct 2 19:43:13.153283 env[1150]: time="2023-10-02T19:43:13.153255847Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2526 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:43:13Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/40bd1fa1f58068f5b1a04665788383513a5d7c5da6528a97c0dc11ad6bf6caf4/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:43:13.153524 env[1150]: time="2023-10-02T19:43:13.153492662Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:43:13.154066 env[1150]: time="2023-10-02T19:43:13.154041984Z" level=error msg="Failed to pipe stdout of container \"40bd1fa1f58068f5b1a04665788383513a5d7c5da6528a97c0dc11ad6bf6caf4\"" error="reading from a closed fifo" Oct 2 19:43:13.154136 env[1150]: time="2023-10-02T19:43:13.154119030Z" level=error msg="Failed to pipe stderr of container \"40bd1fa1f58068f5b1a04665788383513a5d7c5da6528a97c0dc11ad6bf6caf4\"" error="reading from a closed fifo" Oct 2 19:43:13.154663 env[1150]: time="2023-10-02T19:43:13.154647312Z" level=error msg="StartContainer for \"40bd1fa1f58068f5b1a04665788383513a5d7c5da6528a97c0dc11ad6bf6caf4\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:43:13.154844 kubelet[1562]: E1002 19:43:13.154827 1562 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="40bd1fa1f58068f5b1a04665788383513a5d7c5da6528a97c0dc11ad6bf6caf4" Oct 2 19:43:13.154920 kubelet[1562]: E1002 19:43:13.154909 1562 kuberuntime_manager.go:862] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:43:13.154920 kubelet[1562]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:43:13.154920 kubelet[1562]: rm /hostbin/cilium-mount Oct 2 19:43:13.154920 kubelet[1562]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2jgnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-krrnm_kube-system(86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:43:13.155070 kubelet[1562]: E1002 19:43:13.154934 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-krrnm" podUID=86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70 Oct 2 19:43:13.368719 kubelet[1562]: I1002 19:43:13.368579 1562 scope.go:115] "RemoveContainer" containerID="49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb" Oct 2 19:43:13.370064 kubelet[1562]: I1002 19:43:13.370046 1562 scope.go:115] "RemoveContainer" containerID="49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb" Oct 2 19:43:13.370982 env[1150]: time="2023-10-02T19:43:13.370944080Z" level=info msg="RemoveContainer for \"49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb\"" Oct 2 19:43:13.372682 env[1150]: time="2023-10-02T19:43:13.372666720Z" level=info msg="RemoveContainer for \"49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb\"" Oct 2 19:43:13.372782 env[1150]: time="2023-10-02T19:43:13.372763142Z" level=error msg="RemoveContainer for \"49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb\" failed" error="rpc error: code = NotFound desc = get container info: container 
\"49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb\" in namespace \"k8s.io\": not found" Oct 2 19:43:13.372915 kubelet[1562]: E1002 19:43:13.372903 1562 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = NotFound desc = get container info: container \"49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb\" in namespace \"k8s.io\": not found" containerID="49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb" Oct 2 19:43:13.372956 kubelet[1562]: E1002 19:43:13.372926 1562 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = NotFound desc = get container info: container "49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb" in namespace "k8s.io": not found; Skipping pod "cilium-krrnm_kube-system(86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70)" Oct 2 19:43:13.373080 kubelet[1562]: E1002 19:43:13.373069 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-krrnm_kube-system(86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70)\"" pod="kube-system/cilium-krrnm" podUID=86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70 Oct 2 19:43:13.373459 env[1150]: time="2023-10-02T19:43:13.373442604Z" level=info msg="RemoveContainer for \"49b56800b4918944c0a1072bcfa301a6b58e58be619dbae4984ace12c3aeadbb\" returns successfully" Oct 2 19:43:14.095426 kubelet[1562]: E1002 19:43:14.095387 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:14.113978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40bd1fa1f58068f5b1a04665788383513a5d7c5da6528a97c0dc11ad6bf6caf4-rootfs.mount: Deactivated successfully. 
Oct 2 19:43:15.095685 kubelet[1562]: E1002 19:43:15.095651 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:43:16.096335 kubelet[1562]: E1002 19:43:16.096309 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:43:16.251688 kubelet[1562]: W1002 19:43:16.251632 1562 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86a61d5b_5b9e_4cb9_9bcc_8f9a53af5f70.slice/cri-containerd-40bd1fa1f58068f5b1a04665788383513a5d7c5da6528a97c0dc11ad6bf6caf4.scope WatchSource:0}: task 40bd1fa1f58068f5b1a04665788383513a5d7c5da6528a97c0dc11ad6bf6caf4 not found: not found
Oct 2 19:43:17.027181 kubelet[1562]: E1002 19:43:17.027151 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:43:17.097670 kubelet[1562]: E1002 19:43:17.097647 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:43:18.098346 kubelet[1562]: E1002 19:43:18.098319 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:43:19.099035 kubelet[1562]: E1002 19:43:19.098999 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:43:20.099912 kubelet[1562]: E1002 19:43:20.099883 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:43:21.100844 kubelet[1562]: E1002 19:43:21.100802 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:43:22.027882 kubelet[1562]: E1002 19:43:22.027822 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:43:22.101863 kubelet[1562]: E1002 19:43:22.101836 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:43:23.103022 kubelet[1562]: E1002 19:43:23.102985 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:43:24.104032 kubelet[1562]: E1002 19:43:24.103997 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:43:25.104565 kubelet[1562]: E1002 19:43:25.104537 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:43:26.105192 kubelet[1562]: E1002 19:43:26.105167 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:43:26.913579 kubelet[1562]: E1002 19:43:26.913550 1562 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:43:27.028444 kubelet[1562]: E1002 19:43:27.028415 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:43:27.106096 kubelet[1562]: E1002 19:43:27.106073 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:43:28.063154 kubelet[1562]: E1002 19:43:28.063132 1562 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-krrnm_kube-system(86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70)\"" pod="kube-system/cilium-krrnm" podUID=86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70
Oct 2 19:43:28.106935 kubelet[1562]: E1002 19:43:28.106907 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:43:29.107331 kubelet[1562]: E1002 19:43:29.107267 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:43:29.873624 env[1150]: time="2023-10-02T19:43:29.873593163Z" level=info msg="StopPodSandbox for \"128d870941d6ff14b6abdf8f6a519da40cd59de78e8acf326ed5b79d6f8b8759\""
Oct 2 19:43:29.873975 env[1150]: time="2023-10-02T19:43:29.873961675Z" level=info msg="Container to stop \"40bd1fa1f58068f5b1a04665788383513a5d7c5da6528a97c0dc11ad6bf6caf4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 2 19:43:29.875241 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-128d870941d6ff14b6abdf8f6a519da40cd59de78e8acf326ed5b79d6f8b8759-shm.mount: Deactivated successfully.
Oct 2 19:43:29.879832 systemd[1]: cri-containerd-128d870941d6ff14b6abdf8f6a519da40cd59de78e8acf326ed5b79d6f8b8759.scope: Deactivated successfully.
Oct 2 19:43:29.884699 kernel: kauditd_printk_skb: 50 callbacks suppressed
Oct 2 19:43:29.884772 kernel: audit: type=1334 audit(1696275809.878:745): prog-id=82 op=UNLOAD
Oct 2 19:43:29.878000 audit: BPF prog-id=82 op=UNLOAD
Oct 2 19:43:29.884000 audit: BPF prog-id=85 op=UNLOAD
Oct 2 19:43:29.886118 kernel: audit: type=1334 audit(1696275809.884:746): prog-id=85 op=UNLOAD
Oct 2 19:43:29.894187 env[1150]: time="2023-10-02T19:43:29.894162734Z" level=info msg="StopContainer for \"b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f\" with timeout 30 (s)"
Oct 2 19:43:29.894536 env[1150]: time="2023-10-02T19:43:29.894516768Z" level=info msg="Stop container \"b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f\" with signal terminated"
Oct 2 19:43:29.897753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-128d870941d6ff14b6abdf8f6a519da40cd59de78e8acf326ed5b79d6f8b8759-rootfs.mount: Deactivated successfully.
Oct 2 19:43:29.904699 systemd[1]: cri-containerd-b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f.scope: Deactivated successfully.
Oct 2 19:43:29.906784 kernel: audit: type=1334 audit(1696275809.903:747): prog-id=90 op=UNLOAD
Oct 2 19:43:29.909095 kernel: audit: type=1334 audit(1696275809.907:748): prog-id=93 op=UNLOAD
Oct 2 19:43:29.903000 audit: BPF prog-id=90 op=UNLOAD
Oct 2 19:43:29.907000 audit: BPF prog-id=93 op=UNLOAD
Oct 2 19:43:29.918610 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f-rootfs.mount: Deactivated successfully.
Oct 2 19:43:29.934131 env[1150]: time="2023-10-02T19:43:29.934095274Z" level=info msg="shim disconnected" id=b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f
Oct 2 19:43:29.934476 env[1150]: time="2023-10-02T19:43:29.934464464Z" level=warning msg="cleaning up after shim disconnected" id=b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f namespace=k8s.io
Oct 2 19:43:29.934529 env[1150]: time="2023-10-02T19:43:29.934518866Z" level=info msg="cleaning up dead shim"
Oct 2 19:43:29.934665 env[1150]: time="2023-10-02T19:43:29.934167918Z" level=info msg="shim disconnected" id=128d870941d6ff14b6abdf8f6a519da40cd59de78e8acf326ed5b79d6f8b8759
Oct 2 19:43:29.934710 env[1150]: time="2023-10-02T19:43:29.934663777Z" level=warning msg="cleaning up after shim disconnected" id=128d870941d6ff14b6abdf8f6a519da40cd59de78e8acf326ed5b79d6f8b8759 namespace=k8s.io
Oct 2 19:43:29.934710 env[1150]: time="2023-10-02T19:43:29.934670048Z" level=info msg="cleaning up dead shim"
Oct 2 19:43:29.940880 env[1150]: time="2023-10-02T19:43:29.940852899Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2579 runtime=io.containerd.runc.v2\n"
Oct 2 19:43:29.941054 env[1150]: time="2023-10-02T19:43:29.941035762Z" level=info msg="TearDown network for sandbox \"128d870941d6ff14b6abdf8f6a519da40cd59de78e8acf326ed5b79d6f8b8759\" successfully"
Oct 2 19:43:29.941054 env[1150]: time="2023-10-02T19:43:29.941052521Z" level=info msg="StopPodSandbox for \"128d870941d6ff14b6abdf8f6a519da40cd59de78e8acf326ed5b79d6f8b8759\" returns successfully"
Oct 2 19:43:29.943700 env[1150]: time="2023-10-02T19:43:29.943679141Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2578 runtime=io.containerd.runc.v2\n"
Oct 2 19:43:29.944353 env[1150]: time="2023-10-02T19:43:29.944334432Z" level=info msg="StopContainer for \"b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f\" returns successfully"
Oct 2 19:43:29.944558 env[1150]: time="2023-10-02T19:43:29.944529830Z" level=info msg="StopPodSandbox for \"a99c3d57b0820e1a65542fe1a14eadaba0088a92b955aab491921dc35ee4d0a6\""
Oct 2 19:43:29.944595 env[1150]: time="2023-10-02T19:43:29.944569932Z" level=info msg="Container to stop \"b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 2 19:43:29.945338 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a99c3d57b0820e1a65542fe1a14eadaba0088a92b955aab491921dc35ee4d0a6-shm.mount: Deactivated successfully.
Oct 2 19:43:29.949546 systemd[1]: cri-containerd-a99c3d57b0820e1a65542fe1a14eadaba0088a92b955aab491921dc35ee4d0a6.scope: Deactivated successfully.
Oct 2 19:43:29.948000 audit: BPF prog-id=86 op=UNLOAD
Oct 2 19:43:29.951041 kernel: audit: type=1334 audit(1696275809.948:749): prog-id=86 op=UNLOAD
Oct 2 19:43:29.952000 audit: BPF prog-id=89 op=UNLOAD
Oct 2 19:43:29.955074 kernel: audit: type=1334 audit(1696275809.952:750): prog-id=89 op=UNLOAD
Oct 2 19:43:29.964056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a99c3d57b0820e1a65542fe1a14eadaba0088a92b955aab491921dc35ee4d0a6-rootfs.mount: Deactivated successfully.
Oct 2 19:43:29.967058 env[1150]: time="2023-10-02T19:43:29.967027174Z" level=info msg="shim disconnected" id=a99c3d57b0820e1a65542fe1a14eadaba0088a92b955aab491921dc35ee4d0a6
Oct 2 19:43:29.967058 env[1150]: time="2023-10-02T19:43:29.967056220Z" level=warning msg="cleaning up after shim disconnected" id=a99c3d57b0820e1a65542fe1a14eadaba0088a92b955aab491921dc35ee4d0a6 namespace=k8s.io
Oct 2 19:43:29.967159 env[1150]: time="2023-10-02T19:43:29.967062125Z" level=info msg="cleaning up dead shim"
Oct 2 19:43:29.971802 env[1150]: time="2023-10-02T19:43:29.971780197Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2622 runtime=io.containerd.runc.v2\n"
Oct 2 19:43:29.972072 env[1150]: time="2023-10-02T19:43:29.972056624Z" level=info msg="TearDown network for sandbox \"a99c3d57b0820e1a65542fe1a14eadaba0088a92b955aab491921dc35ee4d0a6\" successfully"
Oct 2 19:43:29.972134 env[1150]: time="2023-10-02T19:43:29.972120762Z" level=info msg="StopPodSandbox for \"a99c3d57b0820e1a65542fe1a14eadaba0088a92b955aab491921dc35ee4d0a6\" returns successfully"
Oct 2 19:43:30.084161 kubelet[1562]: I1002 19:43:30.083473 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-hostproc" (OuterVolumeSpecName: "hostproc") pod "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70" (UID: "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:43:30.084161 kubelet[1562]: I1002 19:43:30.083503 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-hostproc\") pod \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") "
Oct 2 19:43:30.084161 kubelet[1562]: I1002 19:43:30.083521 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-cilium-cgroup\") pod \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") "
Oct 2 19:43:30.084161 kubelet[1562]: I1002 19:43:30.083535 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-host-proc-sys-net\") pod \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") "
Oct 2 19:43:30.084161 kubelet[1562]: I1002 19:43:30.083552 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-xtables-lock\") pod \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") "
Oct 2 19:43:30.084161 kubelet[1562]: I1002 19:43:30.083571 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/451fba2f-69f4-4b14-a66d-88bc5b3e2c72-cilium-config-path\") pod \"451fba2f-69f4-4b14-a66d-88bc5b3e2c72\" (UID: \"451fba2f-69f4-4b14-a66d-88bc5b3e2c72\") "
Oct 2 19:43:30.084423 kubelet[1562]: I1002 19:43:30.083583 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-bpf-maps\") pod \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") "
Oct 2 19:43:30.084423 kubelet[1562]: I1002 19:43:30.083597 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-cni-path\") pod \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") "
Oct 2 19:43:30.084423 kubelet[1562]: I1002 19:43:30.083612 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-clustermesh-secrets\") pod \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") "
Oct 2 19:43:30.084423 kubelet[1562]: I1002 19:43:30.083626 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-host-proc-sys-kernel\") pod \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") "
Oct 2 19:43:30.084423 kubelet[1562]: I1002 19:43:30.083642 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-cilium-ipsec-secrets\") pod \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") "
Oct 2 19:43:30.084423 kubelet[1562]: I1002 19:43:30.083658 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzwnh\" (UniqueName: \"kubernetes.io/projected/451fba2f-69f4-4b14-a66d-88bc5b3e2c72-kube-api-access-gzwnh\") pod \"451fba2f-69f4-4b14-a66d-88bc5b3e2c72\" (UID: \"451fba2f-69f4-4b14-a66d-88bc5b3e2c72\") "
Oct 2 19:43:30.084585 kubelet[1562]: I1002 19:43:30.083671 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-etc-cni-netd\") pod \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") "
Oct 2 19:43:30.084585 kubelet[1562]: I1002 19:43:30.083684 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-cilium-run\") pod \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") "
Oct 2 19:43:30.084585 kubelet[1562]: I1002 19:43:30.083702 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-cilium-config-path\") pod \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") "
Oct 2 19:43:30.084585 kubelet[1562]: I1002 19:43:30.083716 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jgnd\" (UniqueName: \"kubernetes.io/projected/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-kube-api-access-2jgnd\") pod \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") "
Oct 2 19:43:30.084585 kubelet[1562]: I1002 19:43:30.083729 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-lib-modules\") pod \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") "
Oct 2 19:43:30.084585 kubelet[1562]: I1002 19:43:30.083742 1562 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-hubble-tls\") pod \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\" (UID: \"86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70\") "
Oct 2 19:43:30.084738 kubelet[1562]: I1002 19:43:30.083764 1562 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-hostproc\") on node \"10.67.124.141\" DevicePath \"\""
Oct 2 19:43:30.084738 kubelet[1562]: I1002 19:43:30.083988 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70" (UID: "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:43:30.084738 kubelet[1562]: I1002 19:43:30.084005 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70" (UID: "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:43:30.084738 kubelet[1562]: I1002 19:43:30.084036 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70" (UID: "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:43:30.084738 kubelet[1562]: I1002 19:43:30.084048 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70" (UID: "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:43:30.084867 kubelet[1562]: I1002 19:43:30.084288 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70" (UID: "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:43:30.084867 kubelet[1562]: I1002 19:43:30.084311 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-cni-path" (OuterVolumeSpecName: "cni-path") pod "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70" (UID: "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:43:30.084867 kubelet[1562]: I1002 19:43:30.084479 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70" (UID: "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:43:30.084867 kubelet[1562]: I1002 19:43:30.084757 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70" (UID: "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:43:30.084975 kubelet[1562]: W1002 19:43:30.084948 1562 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct 2 19:43:30.085188 kubelet[1562]: W1002 19:43:30.085081 1562 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/451fba2f-69f4-4b14-a66d-88bc5b3e2c72/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct 2 19:43:30.085237 kubelet[1562]: I1002 19:43:30.085201 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70" (UID: "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:43:30.086457 kubelet[1562]: I1002 19:43:30.086434 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70" (UID: "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 19:43:30.090741 kubelet[1562]: I1002 19:43:30.090719 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/451fba2f-69f4-4b14-a66d-88bc5b3e2c72-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "451fba2f-69f4-4b14-a66d-88bc5b3e2c72" (UID: "451fba2f-69f4-4b14-a66d-88bc5b3e2c72"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 19:43:30.090802 kubelet[1562]: I1002 19:43:30.090773 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70" (UID: "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:43:30.090839 kubelet[1562]: I1002 19:43:30.090814 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/451fba2f-69f4-4b14-a66d-88bc5b3e2c72-kube-api-access-gzwnh" (OuterVolumeSpecName: "kube-api-access-gzwnh") pod "451fba2f-69f4-4b14-a66d-88bc5b3e2c72" (UID: "451fba2f-69f4-4b14-a66d-88bc5b3e2c72"). InnerVolumeSpecName "kube-api-access-gzwnh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:43:30.091613 kubelet[1562]: I1002 19:43:30.091597 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70" (UID: "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 2 19:43:30.091945 kubelet[1562]: I1002 19:43:30.091930 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70" (UID: "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 2 19:43:30.092631 kubelet[1562]: I1002 19:43:30.092604 1562 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-kube-api-access-2jgnd" (OuterVolumeSpecName: "kube-api-access-2jgnd") pod "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70" (UID: "86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70"). InnerVolumeSpecName "kube-api-access-2jgnd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:43:30.107728 kubelet[1562]: E1002 19:43:30.107709 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:43:30.184077 kubelet[1562]: I1002 19:43:30.184054 1562 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-lib-modules\") on node \"10.67.124.141\" DevicePath \"\""
Oct 2 19:43:30.184211 kubelet[1562]: I1002 19:43:30.184203 1562 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-hubble-tls\") on node \"10.67.124.141\" DevicePath \"\""
Oct 2 19:43:30.184263 kubelet[1562]: I1002 19:43:30.184255 1562 reconciler.go:399] "Volume detached for volume \"kube-api-access-2jgnd\" (UniqueName: \"kubernetes.io/projected/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-kube-api-access-2jgnd\") on node \"10.67.124.141\" DevicePath \"\""
Oct 2 19:43:30.184311 kubelet[1562]: I1002 19:43:30.184305 1562 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-host-proc-sys-net\") on node \"10.67.124.141\" DevicePath \"\""
Oct 2 19:43:30.184357 kubelet[1562]: I1002 19:43:30.184351 1562 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-xtables-lock\") on node \"10.67.124.141\" DevicePath \"\""
Oct 2 19:43:30.184407 kubelet[1562]: I1002 19:43:30.184401 1562 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-cilium-cgroup\") on node \"10.67.124.141\" DevicePath \"\""
Oct 2 19:43:30.184453 kubelet[1562]: I1002 19:43:30.184447 1562 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-host-proc-sys-kernel\") on node \"10.67.124.141\" DevicePath \"\""
Oct 2 19:43:30.184500 kubelet[1562]: I1002 19:43:30.184494 1562 reconciler.go:399] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-cilium-ipsec-secrets\") on node \"10.67.124.141\" DevicePath \"\""
Oct 2 19:43:30.184547 kubelet[1562]: I1002 19:43:30.184541 1562 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/451fba2f-69f4-4b14-a66d-88bc5b3e2c72-cilium-config-path\") on node \"10.67.124.141\" DevicePath \"\""
Oct 2 19:43:30.184592 kubelet[1562]: I1002 19:43:30.184586 1562 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-bpf-maps\") on node \"10.67.124.141\" DevicePath \"\""
Oct 2 19:43:30.184640 kubelet[1562]: I1002 19:43:30.184633 1562 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-cni-path\") on node \"10.67.124.141\" DevicePath \"\""
Oct 2 19:43:30.184685 kubelet[1562]: I1002 19:43:30.184679 1562 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-clustermesh-secrets\") on node \"10.67.124.141\" DevicePath \"\""
Oct 2 19:43:30.184730 kubelet[1562]: I1002 19:43:30.184724 1562 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-etc-cni-netd\") on node \"10.67.124.141\" DevicePath \"\""
Oct 2 19:43:30.184776 kubelet[1562]: I1002 19:43:30.184770 1562 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-cilium-run\") on node \"10.67.124.141\" DevicePath \"\""
Oct 2 19:43:30.184823 kubelet[1562]: I1002 19:43:30.184817 1562 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70-cilium-config-path\") on node \"10.67.124.141\" DevicePath \"\""
Oct 2 19:43:30.184866 kubelet[1562]: I1002 19:43:30.184860 1562 reconciler.go:399] "Volume detached for volume \"kube-api-access-gzwnh\" (UniqueName: \"kubernetes.io/projected/451fba2f-69f4-4b14-a66d-88bc5b3e2c72-kube-api-access-gzwnh\") on node \"10.67.124.141\" DevicePath \"\""
Oct 2 19:43:30.391229 kubelet[1562]: I1002 19:43:30.391210 1562 scope.go:115] "RemoveContainer" containerID="40bd1fa1f58068f5b1a04665788383513a5d7c5da6528a97c0dc11ad6bf6caf4"
Oct 2 19:43:30.393825 systemd[1]: Removed slice kubepods-burstable-pod86a61d5b_5b9e_4cb9_9bcc_8f9a53af5f70.slice.
Oct 2 19:43:30.396435 env[1150]: time="2023-10-02T19:43:30.396222733Z" level=info msg="RemoveContainer for \"40bd1fa1f58068f5b1a04665788383513a5d7c5da6528a97c0dc11ad6bf6caf4\""
Oct 2 19:43:30.399356 systemd[1]: Removed slice kubepods-besteffort-pod451fba2f_69f4_4b14_a66d_88bc5b3e2c72.slice.
Oct 2 19:43:30.402699 env[1150]: time="2023-10-02T19:43:30.402611196Z" level=info msg="RemoveContainer for \"40bd1fa1f58068f5b1a04665788383513a5d7c5da6528a97c0dc11ad6bf6caf4\" returns successfully"
Oct 2 19:43:30.402862 kubelet[1562]: I1002 19:43:30.402849 1562 scope.go:115] "RemoveContainer" containerID="b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f"
Oct 2 19:43:30.404602 env[1150]: time="2023-10-02T19:43:30.404374356Z" level=info msg="RemoveContainer for \"b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f\""
Oct 2 19:43:30.412155 env[1150]: time="2023-10-02T19:43:30.412036660Z" level=info msg="RemoveContainer for \"b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f\" returns successfully"
Oct 2 19:43:30.412251 kubelet[1562]: I1002 19:43:30.412214 1562 scope.go:115] "RemoveContainer" containerID="b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f"
Oct 2 19:43:30.412435 env[1150]: time="2023-10-02T19:43:30.412384232Z" level=error msg="ContainerStatus for \"b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f\": not found"
Oct 2 19:43:30.412567 kubelet[1562]: E1002 19:43:30.412555 1562 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f\": not found" containerID="b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f"
Oct 2 19:43:30.412653 kubelet[1562]: I1002 19:43:30.412643 1562 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f} err="failed to get container status \"b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b6f8fc62c6cfc2f29bc71f80a0dea883ddb098dd0aeae000d71e6d6941164a5f\": not found"
Oct 2 19:43:30.875105 systemd[1]: var-lib-kubelet-pods-451fba2f\x2d69f4\x2d4b14\x2da66d\x2d88bc5b3e2c72-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgzwnh.mount: Deactivated successfully.
Oct 2 19:43:30.875195 systemd[1]: var-lib-kubelet-pods-86a61d5b\x2d5b9e\x2d4cb9\x2d9bcc\x2d8f9a53af5f70-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2jgnd.mount: Deactivated successfully.
Oct 2 19:43:30.875239 systemd[1]: var-lib-kubelet-pods-86a61d5b\x2d5b9e\x2d4cb9\x2d9bcc\x2d8f9a53af5f70-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Oct 2 19:43:30.875280 systemd[1]: var-lib-kubelet-pods-86a61d5b\x2d5b9e\x2d4cb9\x2d9bcc\x2d8f9a53af5f70-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Oct 2 19:43:30.875320 systemd[1]: var-lib-kubelet-pods-86a61d5b\x2d5b9e\x2d4cb9\x2d9bcc\x2d8f9a53af5f70-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Oct 2 19:43:31.063756 kubelet[1562]: I1002 19:43:31.063741 1562 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=451fba2f-69f4-4b14-a66d-88bc5b3e2c72 path="/var/lib/kubelet/pods/451fba2f-69f4-4b14-a66d-88bc5b3e2c72/volumes"
Oct 2 19:43:31.064134 kubelet[1562]: I1002 19:43:31.064126 1562 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70 path="/var/lib/kubelet/pods/86a61d5b-5b9e-4cb9-9bcc-8f9a53af5f70/volumes"
Oct 2 19:43:31.108229 kubelet[1562]: E1002 19:43:31.108208 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:43:32.029615 kubelet[1562]: E1002 19:43:32.029591 1562 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:43:32.109531 kubelet[1562]: E1002 19:43:32.109491 1562 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"