Jun 25 16:20:27.713995 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 13:16:37 -00 2024 Jun 25 16:20:27.714018 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:20:27.714026 kernel: Disabled fast string operations Jun 25 16:20:27.714030 kernel: BIOS-provided physical RAM map: Jun 25 16:20:27.714034 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jun 25 16:20:27.714038 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jun 25 16:20:27.714045 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jun 25 16:20:27.714049 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jun 25 16:20:27.714053 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jun 25 16:20:27.714057 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jun 25 16:20:27.714061 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jun 25 16:20:27.714065 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jun 25 16:20:27.714069 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jun 25 16:20:27.714073 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jun 25 16:20:27.714079 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jun 25 16:20:27.714084 kernel: NX (Execute Disable) protection: active Jun 25 16:20:27.714088 kernel: SMBIOS 2.7 present. Jun 25 16:20:27.714093 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jun 25 16:20:27.714097 kernel: vmware: hypercall mode: 0x00 Jun 25 16:20:27.714102 kernel: Hypervisor detected: VMware Jun 25 16:20:27.714106 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jun 25 16:20:27.714111 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jun 25 16:20:27.714116 kernel: vmware: using clock offset of 4811066186 ns Jun 25 16:20:27.714120 kernel: tsc: Detected 3408.000 MHz processor Jun 25 16:20:27.714125 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 16:20:27.714130 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 16:20:27.714135 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jun 25 16:20:27.714139 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 16:20:27.714144 kernel: total RAM covered: 3072M Jun 25 16:20:27.714148 kernel: Found optimal setting for mtrr clean up Jun 25 16:20:27.714154 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jun 25 16:20:27.714159 kernel: Using GB pages for direct mapping Jun 25 16:20:27.714164 kernel: ACPI: Early table checksum verification disabled Jun 25 16:20:27.714169 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jun 25 16:20:27.714173 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jun 25 16:20:27.714178 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jun 25 16:20:27.714182 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jun 25 16:20:27.714187 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jun 25 16:20:27.714192 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jun 25 16:20:27.714197 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jun 25 16:20:27.714204 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Jun 25 16:20:27.714209 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jun 25 16:20:27.714214 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jun 25 16:20:27.714219 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jun 25 16:20:27.714224 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jun 25 16:20:27.714231 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jun 25 16:20:27.714236 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jun 25 16:20:27.714241 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jun 25 16:20:27.714246 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jun 25 16:20:27.714251 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jun 25 16:20:27.714256 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jun 25 16:20:27.714261 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jun 25 16:20:27.714266 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jun 25 16:20:27.714270 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jun 25 16:20:27.714276 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jun 25 16:20:27.714281 kernel: system APIC only can use physical flat Jun 25 16:20:27.714286 kernel: Setting APIC routing to physical flat. 
Jun 25 16:20:27.714291 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 25 16:20:27.714296 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jun 25 16:20:27.714301 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jun 25 16:20:27.714306 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jun 25 16:20:27.714311 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jun 25 16:20:27.714316 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jun 25 16:20:27.714321 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jun 25 16:20:27.714327 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jun 25 16:20:27.714332 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Jun 25 16:20:27.714337 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Jun 25 16:20:27.714341 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Jun 25 16:20:27.714346 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Jun 25 16:20:27.714351 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Jun 25 16:20:27.714356 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Jun 25 16:20:27.714367 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Jun 25 16:20:27.714372 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Jun 25 16:20:27.714377 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Jun 25 16:20:27.714384 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Jun 25 16:20:27.714389 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Jun 25 16:20:27.714393 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Jun 25 16:20:27.714398 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Jun 25 16:20:27.714403 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Jun 25 16:20:27.714408 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Jun 25 16:20:27.714413 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Jun 25 16:20:27.714418 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Jun 25 16:20:27.714423 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Jun 25 16:20:27.714427 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Jun 25 16:20:27.714432 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Jun 25 16:20:27.714438 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Jun 25 16:20:27.714443 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Jun 25 16:20:27.714448 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Jun 25 16:20:27.714453 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Jun 25 16:20:27.714458 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Jun 25 16:20:27.714463 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Jun 25 16:20:27.714467 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Jun 25 16:20:27.714473 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Jun 25 16:20:27.714477 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Jun 25 16:20:27.714482 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Jun 25 16:20:27.714488 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Jun 25 16:20:27.714493 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Jun 25 16:20:27.714498 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Jun 25 16:20:27.714503 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Jun 25 16:20:27.714508 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Jun 25 16:20:27.714513 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Jun 25 16:20:27.714518 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Jun 25 16:20:27.714523 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Jun 25 16:20:27.714527 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Jun 25 16:20:27.714532 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Jun 25 16:20:27.714538 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Jun 25 16:20:27.714543 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Jun 25 16:20:27.714548 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Jun 25 16:20:27.714553 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Jun 25 16:20:27.714558 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Jun 25 16:20:27.714563 kernel: SRAT: PXM 0 -> APIC 0x6a 
-> Node 0 Jun 25 16:20:27.714567 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Jun 25 16:20:27.714572 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Jun 25 16:20:27.714577 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Jun 25 16:20:27.714599 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Jun 25 16:20:27.714604 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Jun 25 16:20:27.714608 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Jun 25 16:20:27.714613 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Jun 25 16:20:27.714620 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Jun 25 16:20:27.714626 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Jun 25 16:20:27.714631 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Jun 25 16:20:27.714636 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Jun 25 16:20:27.714640 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Jun 25 16:20:27.714645 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Jun 25 16:20:27.714651 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Jun 25 16:20:27.714655 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Jun 25 16:20:27.714660 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Jun 25 16:20:27.714665 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Jun 25 16:20:27.714670 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Jun 25 16:20:27.714674 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Jun 25 16:20:27.714679 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Jun 25 16:20:27.714684 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Jun 25 16:20:27.714688 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Jun 25 16:20:27.714693 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Jun 25 16:20:27.714698 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Jun 25 16:20:27.714703 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Jun 25 16:20:27.714708 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Jun 25 16:20:27.714713 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Jun 25 16:20:27.714717 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Jun 25 16:20:27.714722 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Jun 25 16:20:27.714727 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Jun 25 16:20:27.714731 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Jun 25 16:20:27.714736 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Jun 25 16:20:27.714741 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Jun 25 16:20:27.714745 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Jun 25 16:20:27.714751 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Jun 25 16:20:27.714756 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Jun 25 16:20:27.714760 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Jun 25 16:20:27.714765 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Jun 25 16:20:27.714770 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Jun 25 16:20:27.714775 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Jun 25 16:20:27.714779 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Jun 25 16:20:27.714784 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Jun 25 16:20:27.714789 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Jun 25 16:20:27.714794 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Jun 25 16:20:27.714799 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Jun 25 16:20:27.714804 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Jun 25 16:20:27.714809 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Jun 25 16:20:27.714813 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Jun 25 16:20:27.714818 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Jun 25 16:20:27.714825 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Jun 25 16:20:27.714831 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Jun 25 16:20:27.714835 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Jun 25 16:20:27.714840 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Jun 25 16:20:27.714845 kernel: SRAT: PXM 0 -> 
APIC 0xd6 -> Node 0 Jun 25 16:20:27.714850 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Jun 25 16:20:27.714855 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Jun 25 16:20:27.714860 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Jun 25 16:20:27.714864 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Jun 25 16:20:27.714869 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Jun 25 16:20:27.714874 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Jun 25 16:20:27.714879 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Jun 25 16:20:27.714883 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Jun 25 16:20:27.714888 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Jun 25 16:20:27.714893 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Jun 25 16:20:27.714898 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Jun 25 16:20:27.714903 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Jun 25 16:20:27.714908 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Jun 25 16:20:27.714912 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Jun 25 16:20:27.714917 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Jun 25 16:20:27.714922 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Jun 25 16:20:27.714926 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Jun 25 16:20:27.714931 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Jun 25 16:20:27.714936 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Jun 25 16:20:27.714941 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Jun 25 16:20:27.714945 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jun 25 16:20:27.714951 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jun 25 16:20:27.714956 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jun 25 16:20:27.714961 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Jun 25 16:20:27.714966 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Jun 25 16:20:27.714971 kernel: Zone ranges: Jun 25 16:20:27.714976 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 16:20:27.714981 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jun 25 16:20:27.714986 kernel: Normal empty Jun 25 16:20:27.714991 kernel: Movable zone start for each node Jun 25 16:20:27.714997 kernel: Early memory node ranges Jun 25 16:20:27.715001 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jun 25 16:20:27.715013 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jun 25 16:20:27.715019 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jun 25 16:20:27.715024 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jun 25 16:20:27.715029 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 16:20:27.715034 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jun 25 16:20:27.715039 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jun 25 16:20:27.715044 kernel: ACPI: PM-Timer IO Port: 0x1008 Jun 25 16:20:27.715050 kernel: system APIC only can use physical flat Jun 25 16:20:27.715055 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jun 25 16:20:27.715060 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jun 25 16:20:27.715064 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jun 25 16:20:27.715069 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jun 25 16:20:27.715074 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jun 25 16:20:27.715079 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jun 25 16:20:27.715083 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jun 25 16:20:27.715088 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] 
high edge lint[0x1]) Jun 25 16:20:27.715093 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jun 25 16:20:27.715099 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jun 25 16:20:27.715104 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jun 25 16:20:27.715108 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jun 25 16:20:27.715113 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jun 25 16:20:27.715118 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jun 25 16:20:27.715122 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jun 25 16:20:27.715127 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jun 25 16:20:27.715132 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jun 25 16:20:27.715137 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jun 25 16:20:27.715142 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jun 25 16:20:27.715147 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jun 25 16:20:27.715152 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jun 25 16:20:27.715157 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jun 25 16:20:27.715162 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jun 25 16:20:27.715167 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jun 25 16:20:27.715171 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jun 25 16:20:27.715176 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jun 25 16:20:27.715181 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jun 25 16:20:27.715186 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jun 25 16:20:27.715191 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jun 25 16:20:27.715196 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jun 25 16:20:27.715201 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jun 25 16:20:27.715205 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jun 25 16:20:27.715210 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jun 25 16:20:27.715215 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jun 25 16:20:27.715220 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jun 25 16:20:27.715225 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jun 25 16:20:27.715229 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jun 25 16:20:27.715234 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jun 25 16:20:27.715240 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jun 25 16:20:27.715244 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jun 25 16:20:27.715249 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jun 25 16:20:27.715254 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jun 25 16:20:27.715259 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jun 25 16:20:27.715263 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jun 25 16:20:27.715268 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jun 25 16:20:27.715273 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jun 25 16:20:27.715278 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jun 25 16:20:27.715282 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jun 25 16:20:27.715288 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jun 25 16:20:27.715293 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jun 25 16:20:27.715298 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x32] high edge lint[0x1]) Jun 25 16:20:27.715302 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jun 25 16:20:27.715307 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jun 25 16:20:27.715312 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jun 25 16:20:27.715317 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jun 25 16:20:27.715322 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jun 25 16:20:27.715326 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jun 25 16:20:27.715331 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jun 25 16:20:27.715337 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jun 25 16:20:27.715342 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jun 25 16:20:27.715346 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jun 25 16:20:27.715351 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jun 25 16:20:27.715356 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jun 25 16:20:27.715360 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jun 25 16:20:27.715365 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jun 25 16:20:27.715370 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jun 25 16:20:27.715375 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jun 25 16:20:27.715379 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jun 25 16:20:27.715385 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jun 25 16:20:27.715390 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jun 25 16:20:27.715394 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jun 25 16:20:27.715399 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jun 25 16:20:27.715404 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jun 25 16:20:27.715409 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jun 25 16:20:27.715413 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jun 25 16:20:27.715418 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jun 25 16:20:27.715423 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jun 25 16:20:27.715429 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jun 25 16:20:27.715434 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jun 25 16:20:27.715438 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jun 25 16:20:27.715443 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jun 25 16:20:27.715448 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jun 25 16:20:27.715452 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jun 25 16:20:27.715457 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jun 25 16:20:27.715462 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jun 25 16:20:27.715467 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jun 25 16:20:27.715472 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jun 25 16:20:27.715477 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jun 25 16:20:27.715482 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jun 25 16:20:27.715487 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jun 25 16:20:27.715491 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jun 25 16:20:27.715496 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jun 25 16:20:27.715501 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jun 25 16:20:27.715506 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jun 25 16:20:27.715510 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jun 25 16:20:27.715515 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jun 25 16:20:27.715520 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jun 25 16:20:27.715526 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jun 25 16:20:27.715530 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jun 25 16:20:27.715535 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jun 25 16:20:27.715540 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jun 25 16:20:27.715545 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jun 25 16:20:27.715549 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jun 25 16:20:27.715554 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jun 25 16:20:27.715559 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jun 25 16:20:27.715564 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jun 25 16:20:27.715568 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jun 25 16:20:27.715574 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jun 25 16:20:27.715579 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jun 25 16:20:27.715583 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jun 25 16:20:27.715588 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jun 25 16:20:27.715593 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jun 25 16:20:27.715598 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jun 25 16:20:27.715602 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jun 25 16:20:27.715607 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jun 25 16:20:27.715612 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jun 25 16:20:27.715617 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jun 25 16:20:27.715622 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jun 25 16:20:27.715627 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jun 25 16:20:27.715632 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jun 25 16:20:27.715637 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jun 25 16:20:27.715642 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jun 25 16:20:27.715647 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jun 25 16:20:27.715652 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jun 25 16:20:27.715656 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jun 25 16:20:27.715661 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jun 25 16:20:27.715667 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jun 25 16:20:27.715672 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jun 25 16:20:27.715676 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jun 25 16:20:27.715682 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jun 25 16:20:27.715686 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 16:20:27.715691 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jun 25 16:20:27.715696 kernel: TSC deadline timer available Jun 25 16:20:27.715701 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Jun 25 16:20:27.715706 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jun 25 16:20:27.715711 kernel: Booting paravirtualized kernel on VMware hypervisor Jun 25 16:20:27.715717 
kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 16:20:27.715722 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jun 25 16:20:27.715727 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u262144 Jun 25 16:20:27.715732 kernel: pcpu-alloc: s194792 r8192 d30488 u262144 alloc=1*2097152 Jun 25 16:20:27.715737 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jun 25 16:20:27.715742 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jun 25 16:20:27.715746 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jun 25 16:20:27.715751 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jun 25 16:20:27.715757 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jun 25 16:20:27.715761 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jun 25 16:20:27.715766 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jun 25 16:20:27.715777 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jun 25 16:20:27.715783 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jun 25 16:20:27.715788 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jun 25 16:20:27.715793 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jun 25 16:20:27.715798 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jun 25 16:20:27.715804 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jun 25 16:20:27.715810 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jun 25 16:20:27.715815 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jun 25 16:20:27.715820 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jun 25 16:20:27.715825 kernel: Fallback order for Node 0: 0 Jun 25 16:20:27.715831 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Jun 25 16:20:27.715836 kernel: Policy zone: DMA32 Jun 25 16:20:27.715842 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:20:27.715847 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jun 25 16:20:27.715853 kernel: random: crng init done Jun 25 16:20:27.715859 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jun 25 16:20:27.715864 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jun 25 16:20:27.715869 kernel: printk: log_buf_len min size: 262144 bytes Jun 25 16:20:27.715874 kernel: printk: log_buf_len: 1048576 bytes Jun 25 16:20:27.715879 kernel: printk: early log buf free: 239640(91%) Jun 25 16:20:27.715884 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 16:20:27.715890 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 16:20:27.715895 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 16:20:27.715901 kernel: Memory: 1933736K/2096628K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 162632K reserved, 0K cma-reserved) Jun 25 16:20:27.715907 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jun 25 16:20:27.715912 kernel: ftrace: allocating 36080 entries in 141 pages Jun 25 16:20:27.715918 kernel: ftrace: allocated 141 pages with 4 groups Jun 25 16:20:27.715923 kernel: Dynamic Preempt: voluntary Jun 25 16:20:27.715928 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 16:20:27.715935 kernel: rcu: RCU event tracing is enabled. Jun 25 16:20:27.715940 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jun 25 16:20:27.715945 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 16:20:27.715950 kernel: Rude variant of Tasks RCU enabled. Jun 25 16:20:27.715955 kernel: Tracing variant of Tasks RCU enabled. Jun 25 16:20:27.715961 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 16:20:27.715966 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jun 25 16:20:27.715971 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jun 25 16:20:27.715977 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Jun 25 16:20:27.715983 kernel: Console: colour VGA+ 80x25 Jun 25 16:20:27.715988 kernel: printk: console [tty0] enabled Jun 25 16:20:27.715993 kernel: printk: console [ttyS0] enabled Jun 25 16:20:27.715998 kernel: ACPI: Core revision 20220331 Jun 25 16:20:27.716007 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jun 25 16:20:27.716013 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 16:20:27.716018 kernel: x2apic enabled Jun 25 16:20:27.716024 kernel: Switched APIC routing to physical x2apic. Jun 25 16:20:27.716037 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 25 16:20:27.716046 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jun 25 16:20:27.716051 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Jun 25 16:20:27.716057 kernel: Disabled fast string operations Jun 25 16:20:27.716062 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 25 16:20:27.716067 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jun 25 16:20:27.716072 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 16:20:27.716078 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jun 25 16:20:27.716083 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jun 25 16:20:27.716088 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jun 25 16:20:27.716095 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 16:20:27.716100 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jun 25 16:20:27.716105 kernel: RETBleed: Mitigation: Enhanced IBRS Jun 25 16:20:27.716111 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 25 16:20:27.716116 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 25 16:20:27.716121 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 16:20:27.716126 kernel: SRBDS: Unknown: Dependent on hypervisor status Jun 25 16:20:27.716131 kernel: GDS: Unknown: Dependent on hypervisor status Jun 25 16:20:27.716137 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 16:20:27.716143 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 16:20:27.716148 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 16:20:27.716153 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 16:20:27.716158 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jun 25 16:20:27.716164 kernel: Freeing SMP alternatives memory: 32K Jun 25 16:20:27.716169 kernel: pid_max: default: 131072 minimum: 1024 Jun 25 16:20:27.716174 kernel: LSM: Security Framework initializing Jun 25 16:20:27.716180 kernel: SELinux: Initializing. Jun 25 16:20:27.716186 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 16:20:27.716192 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 16:20:27.716197 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jun 25 16:20:27.716202 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:20:27.716208 kernel: cblist_init_generic: Setting shift to 7 and lim to 1. Jun 25 16:20:27.716213 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:20:27.716218 kernel: cblist_init_generic: Setting shift to 7 and lim to 1. Jun 25 16:20:27.716224 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:20:27.716229 kernel: cblist_init_generic: Setting shift to 7 and lim to 1. Jun 25 16:20:27.716234 kernel: Performance Events: Skylake events, core PMU driver. 
Jun 25 16:20:27.716239 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jun 25 16:20:27.716245 kernel: core: CPUID marked event: 'instructions' unavailable Jun 25 16:20:27.716250 kernel: core: CPUID marked event: 'bus cycles' unavailable Jun 25 16:20:27.716256 kernel: core: CPUID marked event: 'cache references' unavailable Jun 25 16:20:27.716261 kernel: core: CPUID marked event: 'cache misses' unavailable Jun 25 16:20:27.716266 kernel: core: CPUID marked event: 'branch instructions' unavailable Jun 25 16:20:27.716271 kernel: core: CPUID marked event: 'branch misses' unavailable Jun 25 16:20:27.716276 kernel: ... version: 1 Jun 25 16:20:27.716281 kernel: ... bit width: 48 Jun 25 16:20:27.716287 kernel: ... generic registers: 4 Jun 25 16:20:27.716293 kernel: ... value mask: 0000ffffffffffff Jun 25 16:20:27.716298 kernel: ... max period: 000000007fffffff Jun 25 16:20:27.716303 kernel: ... fixed-purpose events: 0 Jun 25 16:20:27.716308 kernel: ... event mask: 000000000000000f Jun 25 16:20:27.716313 kernel: signal: max sigframe size: 1776 Jun 25 16:20:27.716318 kernel: rcu: Hierarchical SRCU implementation. Jun 25 16:20:27.716324 kernel: rcu: Max phase no-delay instances is 400. Jun 25 16:20:27.716329 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 25 16:20:27.716335 kernel: smp: Bringing up secondary CPUs ... Jun 25 16:20:27.716340 kernel: x86: Booting SMP configuration: Jun 25 16:20:27.716345 kernel: .... node #0, CPUs: #1 Jun 25 16:20:27.716351 kernel: Disabled fast string operations Jun 25 16:20:27.716356 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jun 25 16:20:27.716361 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jun 25 16:20:27.716366 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 16:20:27.716371 kernel: smpboot: Max logical packages: 128 Jun 25 16:20:27.716376 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jun 25 16:20:27.716382 kernel: devtmpfs: initialized Jun 25 16:20:27.716388 kernel: x86/mm: Memory block size: 128MB Jun 25 16:20:27.716393 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jun 25 16:20:27.716398 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 16:20:27.716403 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jun 25 16:20:27.716409 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 16:20:27.716414 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 16:20:27.716419 kernel: audit: initializing netlink subsys (disabled) Jun 25 16:20:27.716424 kernel: audit: type=2000 audit(1719332426.063:1): state=initialized audit_enabled=0 res=1 Jun 25 16:20:27.716429 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 16:20:27.716435 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 16:20:27.716441 kernel: cpuidle: using governor menu Jun 25 16:20:27.716446 kernel: Simple Boot Flag at 0x36 set to 0x80 Jun 25 16:20:27.716451 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 16:20:27.716456 kernel: dca service started, version 1.12.1 Jun 25 16:20:27.716461 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jun 25 16:20:27.716467 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820 Jun 25 16:20:27.716472 kernel: PCI: Using configuration type 1 for base access Jun 25 16:20:27.716477 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. Jun 25 16:20:27.716483 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 16:20:27.716488 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 16:20:27.716493 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 16:20:27.716499 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 16:20:27.716504 kernel: ACPI: Added _OSI(Module Device) Jun 25 16:20:27.716509 kernel: ACPI: Added _OSI(Processor Device) Jun 25 16:20:27.716514 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 16:20:27.716519 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 16:20:27.716524 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 16:20:27.716530 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jun 25 16:20:27.716536 kernel: ACPI: Interpreter enabled Jun 25 16:20:27.716542 kernel: ACPI: PM: (supports S0 S1 S5) Jun 25 16:20:27.716547 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 16:20:27.716552 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 16:20:27.716557 kernel: PCI: Using E820 reservations for host bridge windows Jun 25 16:20:27.716562 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jun 25 16:20:27.716568 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jun 25 16:20:27.716643 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 25 16:20:27.716694 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jun 25 16:20:27.716740 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jun 25 16:20:27.716748 kernel: PCI host bridge to bus 0000:00 Jun 25 16:20:27.716794 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 25 16:20:27.716835 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jun 25 16:20:27.716876 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jun 25 16:20:27.716919 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 16:20:27.716959 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jun 25 16:20:27.716999 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jun 25 16:20:27.717084 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jun 25 16:20:27.717138 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jun 25 16:20:27.717190 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jun 25 16:20:27.717245 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jun 25 16:20:27.717292 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jun 25 16:20:27.717338 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jun 25 16:20:27.717388 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jun 25 16:20:27.717435 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jun 25 16:20:27.717480 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jun 25 16:20:27.717531 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jun 25 16:20:27.717580 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jun 25 16:20:27.717627 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jun 25 16:20:27.717679 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jun 25 16:20:27.717726 kernel: pci 0000:00:07.7: reg 0x10: [io 
0x1080-0x10bf] Jun 25 16:20:27.717773 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jun 25 16:20:27.717822 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jun 25 16:20:27.717875 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jun 25 16:20:27.717921 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jun 25 16:20:27.717967 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jun 25 16:20:27.718026 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jun 25 16:20:27.718075 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 25 16:20:27.718124 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jun 25 16:20:27.718175 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.718224 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.718276 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.718326 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.718376 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.718423 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.718472 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.718520 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.718569 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.718614 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.718666 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.718712 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.718761 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.718811 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.718859 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.718905 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.718953 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.719000 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.719073 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.719122 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.719172 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.719217 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.719267 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.719313 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.719363 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.719413 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.719462 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.719509 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.719560 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.719607 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.719656 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.719705 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.719754 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.719800 kernel: 
pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.719851 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.719897 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.719946 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.719996 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.720064 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.720112 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.720161 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.720207 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.720256 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.720302 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.720355 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.720423 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.720471 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.720516 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.720564 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.720611 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.720661 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.720741 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.720790 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.720838 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.720888 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.720933 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.720983 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.721070 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.721120 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.721166 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.721216 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.721261 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.721309 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jun 25 16:20:27.721357 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.721424 kernel: pci_bus 0000:01: extended config space not accessible Jun 25 16:20:27.721469 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jun 25 16:20:27.721515 kernel: pci_bus 0000:02: extended config space not accessible Jun 25 16:20:27.721523 kernel: acpiphp: Slot [32] registered Jun 25 16:20:27.721528 kernel: acpiphp: Slot [33] registered Jun 25 16:20:27.721535 kernel: acpiphp: Slot [34] registered Jun 25 16:20:27.721540 kernel: acpiphp: Slot [35] registered Jun 25 16:20:27.721545 kernel: acpiphp: Slot [36] registered Jun 25 16:20:27.721550 kernel: acpiphp: Slot [37] registered Jun 25 16:20:27.721556 kernel: acpiphp: Slot [38] registered Jun 25 16:20:27.721561 kernel: acpiphp: Slot [39] registered Jun 25 16:20:27.721566 kernel: acpiphp: Slot [40] registered Jun 25 16:20:27.721571 kernel: acpiphp: Slot [41] registered Jun 25 16:20:27.721576 kernel: acpiphp: Slot [42] registered Jun 25 16:20:27.721580 kernel: acpiphp: Slot [43] 
registered Jun 25 16:20:27.721586 kernel: acpiphp: Slot [44] registered Jun 25 16:20:27.721591 kernel: acpiphp: Slot [45] registered Jun 25 16:20:27.721596 kernel: acpiphp: Slot [46] registered Jun 25 16:20:27.721601 kernel: acpiphp: Slot [47] registered Jun 25 16:20:27.721606 kernel: acpiphp: Slot [48] registered Jun 25 16:20:27.721611 kernel: acpiphp: Slot [49] registered Jun 25 16:20:27.721616 kernel: acpiphp: Slot [50] registered Jun 25 16:20:27.721621 kernel: acpiphp: Slot [51] registered Jun 25 16:20:27.721626 kernel: acpiphp: Slot [52] registered Jun 25 16:20:27.721632 kernel: acpiphp: Slot [53] registered Jun 25 16:20:27.721637 kernel: acpiphp: Slot [54] registered Jun 25 16:20:27.721642 kernel: acpiphp: Slot [55] registered Jun 25 16:20:27.721647 kernel: acpiphp: Slot [56] registered Jun 25 16:20:27.721652 kernel: acpiphp: Slot [57] registered Jun 25 16:20:27.721657 kernel: acpiphp: Slot [58] registered Jun 25 16:20:27.721662 kernel: acpiphp: Slot [59] registered Jun 25 16:20:27.721667 kernel: acpiphp: Slot [60] registered Jun 25 16:20:27.721672 kernel: acpiphp: Slot [61] registered Jun 25 16:20:27.721677 kernel: acpiphp: Slot [62] registered Jun 25 16:20:27.721683 kernel: acpiphp: Slot [63] registered Jun 25 16:20:27.721727 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jun 25 16:20:27.721771 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jun 25 16:20:27.721814 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jun 25 16:20:27.721857 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jun 25 16:20:27.721902 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jun 25 16:20:27.721945 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jun 25 16:20:27.721991 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jun 25 16:20:27.722085 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jun 25 16:20:27.722130 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jun 25 16:20:27.722179 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jun 25 16:20:27.722224 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jun 25 16:20:27.722269 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jun 25 16:20:27.722315 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jun 25 16:20:27.722359 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jun 25 16:20:27.722407 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jun 25 16:20:27.722452 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jun 25 16:20:27.722496 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jun 25 16:20:27.722539 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jun 25 16:20:27.722583 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jun 25 16:20:27.722626 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jun 25 16:20:27.722670 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jun 25 16:20:27.722717 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jun 25 16:20:27.722761 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jun 25 16:20:27.722805 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jun 25 16:20:27.722849 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jun 25 16:20:27.722893 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jun 25 16:20:27.722938 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jun 25 16:20:27.722982 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jun 25 16:20:27.723033 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jun 25 16:20:27.723080 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jun 25 16:20:27.723125 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jun 25 16:20:27.723170 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jun 25 16:20:27.723215 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jun 25 16:20:27.723261 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jun 25 16:20:27.723305 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jun 25 16:20:27.723350 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jun 25 16:20:27.723415 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jun 25 16:20:27.723495 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jun 25 16:20:27.723562 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jun 25 16:20:27.723607 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jun 25 16:20:27.723652 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jun 25 16:20:27.723706 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jun 25 16:20:27.723753 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jun 25 16:20:27.723800 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jun 25 16:20:27.723850 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jun 25 16:20:27.723897 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jun 25 16:20:27.723944 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jun 25 16:20:27.723990 kernel: pci 0000:0b:00.0: supports D1 D2 Jun 25 16:20:27.724053 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jun 25 16:20:27.724101 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jun 25 16:20:27.724147 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jun 25 16:20:27.724192 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jun 25 16:20:27.724236 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jun 25 16:20:27.724281 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jun 25 16:20:27.724325 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jun 25 16:20:27.724377 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jun 25 16:20:27.724428 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jun 25 16:20:27.724474 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jun 25 16:20:27.724520 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jun 25 16:20:27.724564 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jun 25 16:20:27.724609 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jun 25 16:20:27.724655 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jun 25 16:20:27.724699 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jun 25 16:20:27.724744 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jun 25 16:20:27.724792 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jun 25 16:20:27.724837 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jun 25 16:20:27.724882 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jun 25 16:20:27.724927 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jun 25 16:20:27.724972 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jun 25 16:20:27.725039 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jun 25 16:20:27.725085 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jun 25 16:20:27.725130 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jun 25 16:20:27.725177 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jun 25 16:20:27.725222 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jun 25 16:20:27.725266 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jun 25 16:20:27.725310 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jun 25 16:20:27.725355 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jun 25 16:20:27.725400 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jun 25 16:20:27.725445 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jun 25 16:20:27.725489 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jun 25 16:20:27.725536 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jun 25 16:20:27.725581 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jun 25 16:20:27.725625 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jun 25 16:20:27.725668 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jun 25 16:20:27.725714 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jun 25 16:20:27.725759 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jun 25 16:20:27.725804 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jun 25 16:20:27.725851 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jun 25 16:20:27.725896 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jun 25 16:20:27.725941 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jun 25 16:20:27.725985 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jun 25 16:20:27.726037 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jun 25 16:20:27.726081 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jun 25 16:20:27.726126 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jun 25 16:20:27.726171 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jun 25 16:20:27.726219 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jun 25 16:20:27.726263 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jun 25 16:20:27.726310 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jun 25 16:20:27.726354 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jun 25 16:20:27.726400 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jun 25 16:20:27.726446 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jun 25 16:20:27.726492 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jun 25 16:20:27.726536 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jun 25 16:20:27.726584 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jun 25 16:20:27.726648 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jun 25 16:20:27.726693 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jun 25 16:20:27.726754 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jun 25 16:20:27.726800 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jun 25 16:20:27.726847 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jun 25 16:20:27.726894 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jun 25 16:20:27.726939 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jun 25 16:20:27.726987 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jun 25 16:20:27.727039 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jun 25 16:20:27.727085 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jun 25 16:20:27.727129 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jun 25 16:20:27.727174 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jun 25 16:20:27.727219 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jun 25 16:20:27.727264 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jun 25 16:20:27.727308 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jun 25 16:20:27.727356 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jun 25 16:20:27.727431 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jun 25 16:20:27.727477 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jun 25 16:20:27.727522 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jun 25 16:20:27.727569 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jun 25 16:20:27.727615 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jun 25 16:20:27.727661 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jun 25 16:20:27.727707 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jun 25 16:20:27.727756 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jun 25 16:20:27.727816 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jun 25 16:20:27.727824 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jun 25 16:20:27.727830 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 Jun 25 16:20:27.727836 kernel: ACPI: PCI: Interrupt link LNKB disabled Jun 25 16:20:27.727841 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 16:20:27.727847 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jun 25 16:20:27.727852 kernel: iommu: Default domain type: Translated Jun 25 16:20:27.727857 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 16:20:27.727865 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 16:20:27.727870 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 16:20:27.727876 kernel: PTP clock support registered Jun 25 16:20:27.727881 kernel: PCI: Using ACPI for IRQ routing Jun 25 16:20:27.727886 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 16:20:27.727892 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jun 25 16:20:27.727897 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jun 25 16:20:27.727943 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jun 25 16:20:27.728074 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jun 25 16:20:27.728125 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 25 16:20:27.728133 kernel: vgaarb: loaded Jun 25 16:20:27.728139 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jun 25 16:20:27.728144 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jun 25 16:20:27.728150 kernel: clocksource: Switched to clocksource tsc-early Jun 25 16:20:27.728155 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 16:20:27.728160 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 16:20:27.728166 kernel: pnp: PnP ACPI init Jun 25 16:20:27.728216 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jun 25 16:20:27.728258 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jun 25 16:20:27.728299 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jun 25 16:20:27.728346 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jun 25 16:20:27.728391 kernel: pnp 00:06: [dma 2] Jun 25 16:20:27.728435 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jun 25 16:20:27.728477 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jun 25 16:20:27.728520 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jun 25 16:20:27.728527 kernel: pnp: PnP ACPI: found 8 devices Jun 25 16:20:27.728533 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 16:20:27.728539 kernel: NET: Registered PF_INET protocol family Jun 25 16:20:27.728544 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 16:20:27.728550 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 25 16:20:27.728555 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 16:20:27.728560 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 16:20:27.728567 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 25 16:20:27.728573 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 25 16:20:27.728578 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 16:20:27.728586 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 16:20:27.728609 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 
16:20:27.728630 kernel: NET: Registered PF_XDP protocol family Jun 25 16:20:27.728718 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jun 25 16:20:27.728795 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jun 25 16:20:27.728848 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jun 25 16:20:27.728895 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jun 25 16:20:27.728941 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jun 25 16:20:27.728988 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jun 25 16:20:27.729092 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jun 25 16:20:27.729138 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jun 25 16:20:27.729187 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jun 25 16:20:27.729232 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jun 25 16:20:27.729277 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jun 25 16:20:27.729323 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jun 25 16:20:27.729369 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jun 25 16:20:27.729414 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jun 25 16:20:27.729462 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jun 25 16:20:27.729511 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jun 25 16:20:27.729589 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jun 25 16:20:27.729641 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jun 25 16:20:27.729688 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jun 25 16:20:27.729737 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jun 25 16:20:27.729784 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jun 25 16:20:27.729829 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jun 25 16:20:27.729880 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jun 25 16:20:27.729926 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jun 25 16:20:27.729972 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jun 25 16:20:27.730029 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.730079 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.730126 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.730171 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.730217 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.730261 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.730308 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.730353 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jun 25 
16:20:27.730410 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.730460 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.730505 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.730551 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.730596 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.730641 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.730686 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.730732 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.730777 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.730824 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.730871 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.730916 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.730961 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.731019 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.731072 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.731119 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.731164 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.731213 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.731259 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.731304 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.731349 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.731405 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.731452 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.731497 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.731543 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.731591 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.731637 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.731682 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.731728 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.731774 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.731819 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.731865 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.731912 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.731960 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.732016 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.732063 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.732107 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.732153 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.732201 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.732247 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] 
Jun 25 16:20:27.732292 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.732337 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.732384 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.732430 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.732475 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.732521 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.732566 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.732611 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.732656 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.732701 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.732746 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.732792 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.732839 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.732889 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.732934 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.732979 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.733039 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.733086 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.733131 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.733177 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.733222 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.733270 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.733315 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.733361 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.733407 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.733452 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.733498 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.733544 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.733590 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.733636 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.733682 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.733730 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.733776 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.733821 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.733866 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jun 25 16:20:27.733911 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:20:27.733956 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jun 25 16:20:27.734002 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jun 25 16:20:27.734112 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jun 25 16:20:27.734159 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jun 25 16:20:27.734207 kernel: 
pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jun 25 16:20:27.734256 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jun 25 16:20:27.734561 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jun 25 16:20:27.734611 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jun 25 16:20:27.734657 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jun 25 16:20:27.734704 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jun 25 16:20:27.734750 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jun 25 16:20:27.734796 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jun 25 16:20:27.734844 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jun 25 16:20:27.734890 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jun 25 16:20:27.734935 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jun 25 16:20:27.734980 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jun 25 16:20:27.735063 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jun 25 16:20:27.735110 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jun 25 16:20:27.735155 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jun 25 16:20:27.735201 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jun 25 16:20:27.735246 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jun 25 16:20:27.735291 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jun 25 16:20:27.735338 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jun 25 16:20:27.735384 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jun 25 16:20:27.735431 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jun 25 16:20:27.735477 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jun 25 16:20:27.735522 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jun 25 16:20:27.735570 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jun 25 16:20:27.735616 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jun 25 16:20:27.735662 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jun 25 16:20:27.735706 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jun 25 16:20:27.735752 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jun 25 16:20:27.735798 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jun 25 16:20:27.735846 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jun 25 16:20:27.735898 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jun 25 16:20:27.735944 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jun 25 16:20:27.735990 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jun 25 16:20:27.736052 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jun 25 16:20:27.736099 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jun 25 16:20:27.736145 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jun 25 16:20:27.736192 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jun 25 16:20:27.736237 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jun 25 16:20:27.736283 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jun 25 16:20:27.736328 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jun 25 16:20:27.736375 kernel: pci 0000:00:16.2: bridge window [mem 
0xfcc00000-0xfccfffff] Jun 25 16:20:27.736420 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jun 25 16:20:27.736468 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jun 25 16:20:27.736513 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jun 25 16:20:27.736558 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jun 25 16:20:27.736604 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jun 25 16:20:27.736650 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jun 25 16:20:27.736696 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jun 25 16:20:27.736741 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jun 25 16:20:27.736787 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jun 25 16:20:27.736832 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jun 25 16:20:27.736879 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jun 25 16:20:27.736925 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jun 25 16:20:27.736969 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jun 25 16:20:27.737051 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jun 25 16:20:27.737101 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jun 25 16:20:27.737153 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jun 25 16:20:27.737237 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jun 25 16:20:27.737285 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jun 25 16:20:27.737332 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jun 25 16:20:27.737381 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jun 25 16:20:27.737430 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jun 25 16:20:27.737476 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jun 25 16:20:27.737521 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jun 25 16:20:27.737565 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jun 25 16:20:27.737611 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jun 25 16:20:27.737656 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jun 25 16:20:27.737701 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jun 25 16:20:27.737747 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jun 25 16:20:27.737792 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jun 25 16:20:27.737841 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jun 25 16:20:27.737886 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jun 25 16:20:27.737932 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jun 25 16:20:27.737977 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jun 25 16:20:27.738044 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jun 25 16:20:27.738092 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jun 25 16:20:27.738138 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jun 25 16:20:27.738184 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jun 25 16:20:27.738230 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jun 25 16:20:27.738276 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jun 25 16:20:27.738325 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jun 
25 16:20:27.738370 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jun 25 16:20:27.738416 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jun 25 16:20:27.738461 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jun 25 16:20:27.738508 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jun 25 16:20:27.738553 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jun 25 16:20:27.738599 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jun 25 16:20:27.738644 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jun 25 16:20:27.738690 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jun 25 16:20:27.738737 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jun 25 16:20:27.738783 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jun 25 16:20:27.738828 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jun 25 16:20:27.738877 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jun 25 16:20:27.738924 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jun 25 16:20:27.738969 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jun 25 16:20:27.739439 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jun 25 16:20:27.741735 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jun 25 16:20:27.741798 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jun 25 16:20:27.741850 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jun 25 16:20:27.741906 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jun 25 16:20:27.741954 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jun 25 16:20:27.742001 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jun 25 16:20:27.742087 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jun 25 16:20:27.742134 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jun 25 16:20:27.742180 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jun 25 16:20:27.742226 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jun 25 16:20:27.742272 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jun 25 16:20:27.742319 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jun 25 16:20:27.742367 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jun 25 16:20:27.742413 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jun 25 16:20:27.742457 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jun 25 16:20:27.742498 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jun 25 16:20:27.742539 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jun 25 16:20:27.742580 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jun 25 16:20:27.742620 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jun 25 16:20:27.742664 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jun 25 16:20:27.742710 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jun 25 16:20:27.742760 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jun 25 16:20:27.742804 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jun 25 16:20:27.742846 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jun 25 16:20:27.742889 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jun 25 
16:20:27.742933 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jun 25 16:20:27.742981 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jun 25 16:20:27.743055 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jun 25 16:20:27.743109 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jun 25 16:20:27.743160 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jun 25 16:20:27.743210 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jun 25 16:20:27.743267 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jun 25 16:20:27.743314 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jun 25 16:20:27.743364 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jun 25 16:20:27.743410 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jun 25 16:20:27.743455 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jun 25 16:20:27.743502 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jun 25 16:20:27.743544 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jun 25 16:20:27.743591 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jun 25 16:20:27.743634 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jun 25 16:20:27.743681 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jun 25 16:20:27.743727 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jun 25 16:20:27.743783 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jun 25 16:20:27.743826 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jun 25 16:20:27.743872 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jun 25 16:20:27.743915 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jun 25 16:20:27.743964 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jun 25 16:20:27.744062 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jun 25 16:20:27.744110 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jun 25 16:20:27.744159 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jun 25 16:20:27.744202 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jun 25 16:20:27.744244 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jun 25 16:20:27.744290 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jun 25 16:20:27.744335 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jun 25 16:20:27.744381 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jun 25 16:20:27.744428 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jun 25 16:20:27.744470 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jun 25 16:20:27.744516 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jun 25 16:20:27.744559 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jun 25 16:20:27.744606 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jun 25 16:20:27.744653 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jun 25 16:20:27.744699 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jun 25 16:20:27.744742 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jun 25 16:20:27.744790 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jun 25 16:20:27.744834 kernel: pci_bus 0000:12: resource 2 
[mem 0xe5f00000-0xe5ffffff 64bit pref] Jun 25 16:20:27.744880 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jun 25 16:20:27.744926 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jun 25 16:20:27.744969 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jun 25 16:20:27.745022 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jun 25 16:20:27.745066 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jun 25 16:20:27.745108 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jun 25 16:20:27.745154 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jun 25 16:20:27.745200 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jun 25 16:20:27.745243 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jun 25 16:20:27.745290 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jun 25 16:20:27.745332 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jun 25 16:20:27.745378 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jun 25 16:20:27.745421 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jun 25 16:20:27.745483 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jun 25 16:20:27.745531 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jun 25 16:20:27.745578 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jun 25 16:20:27.745622 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jun 25 16:20:27.745671 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jun 25 16:20:27.745714 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jun 25 16:20:27.745763 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jun 25 16:20:27.745807 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jun 25 16:20:27.745850 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jun 25 16:20:27.745896 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jun 25 16:20:27.745939 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jun 25 16:20:27.745985 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jun 25 16:20:27.746049 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jun 25 16:20:27.746097 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jun 25 16:20:27.746144 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jun 25 16:20:27.746188 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jun 25 16:20:27.746237 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jun 25 16:20:27.746280 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jun 25 16:20:27.746327 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jun 25 16:20:27.746374 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jun 25 16:20:27.746420 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jun 25 16:20:27.746464 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jun 25 16:20:27.746510 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jun 25 16:20:27.746553 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jun 25 16:20:27.746611 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 25 16:20:27.746621 kernel: PCI: CLS 32 bytes, default 64 Jun 25 
16:20:27.746630 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 25 16:20:27.746636 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jun 25 16:20:27.746642 kernel: clocksource: Switched to clocksource tsc Jun 25 16:20:27.746648 kernel: Initialise system trusted keyrings Jun 25 16:20:27.746653 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 25 16:20:27.746659 kernel: Key type asymmetric registered Jun 25 16:20:27.746665 kernel: Asymmetric key parser 'x509' registered Jun 25 16:20:27.746670 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 16:20:27.746676 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 16:20:27.746683 kernel: io scheduler mq-deadline registered Jun 25 16:20:27.746689 kernel: io scheduler kyber registered Jun 25 16:20:27.746694 kernel: io scheduler bfq registered Jun 25 16:20:27.746742 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jun 25 16:20:27.746803 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.746853 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jun 25 16:20:27.746900 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.746947 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jun 25 16:20:27.746998 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.747085 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jun 25 16:20:27.747133 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.747180 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jun 25 16:20:27.747227 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.747552 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jun 25 16:20:27.747605 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.747652 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jun 25 16:20:27.747699 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.747747 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jun 25 16:20:27.747794 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.747844 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jun 25 16:20:27.747890 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.747937 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jun 25 16:20:27.747983 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.748037 kernel: pcieport 0000:00:16.2: PME: 
Signaling with IRQ 34 Jun 25 16:20:27.748085 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.748133 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jun 25 16:20:27.748182 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.748230 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jun 25 16:20:27.748277 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.748325 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jun 25 16:20:27.748371 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.748421 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jun 25 16:20:27.748477 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.748532 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jun 25 16:20:27.748580 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.748627 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jun 25 16:20:27.748674 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.748724 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jun 25 16:20:27.748771 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.748818 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jun 25 16:20:27.748866 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.748913 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jun 25 16:20:27.748960 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.749026 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jun 25 16:20:27.749079 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.749127 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jun 25 16:20:27.749173 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.749225 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jun 25 16:20:27.749286 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.749339 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jun 25 16:20:27.749390 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.749442 kernel: pcieport 0000:00:18.0: PME: Signaling 
with IRQ 48 Jun 25 16:20:27.749491 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.749550 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jun 25 16:20:27.749602 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.749664 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jun 25 16:20:27.749718 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.749775 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jun 25 16:20:27.749824 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.749874 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jun 25 16:20:27.749924 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.749975 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jun 25 16:20:27.750029 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.750078 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jun 25 16:20:27.750125 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.750173 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jun 25 16:20:27.750223 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:20:27.750231 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 16:20:27.750237 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 16:20:27.750244 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 16:20:27.750250 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jun 25 16:20:27.750256 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 16:20:27.750262 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 16:20:27.750311 kernel: rtc_cmos 00:01: registered as rtc0 Jun 25 16:20:27.750354 kernel: rtc_cmos 00:01: setting system clock to 2024-06-25T16:20:27 UTC (1719332427) Jun 25 16:20:27.750363 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 25 16:20:27.750404 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jun 25 16:20:27.750411 kernel: fail to initialize ptp_kvm Jun 25 16:20:27.750417 kernel: intel_pstate: CPU model not supported Jun 25 16:20:27.750423 kernel: NET: Registered PF_INET6 protocol family Jun 25 16:20:27.750429 kernel: Segment Routing with IPv6 Jun 25 16:20:27.750436 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 16:20:27.750442 kernel: NET: Registered PF_PACKET protocol family Jun 25 16:20:27.750448 kernel: Key type dns_resolver registered Jun 25 16:20:27.750454 kernel: IPI shorthand broadcast: enabled Jun 25 16:20:27.750460 kernel: sched_clock: Marking stable (912043577, 232534335)->(1210201550, -65623638) Jun 25 16:20:27.750465 kernel: registered taskstats version 1 Jun 25 
16:20:27.750471 kernel: Loading compiled-in X.509 certificates Jun 25 16:20:27.750476 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: c37bb6ef57220bb1c07535cfcaa08c84d806a137' Jun 25 16:20:27.750482 kernel: Key type .fscrypt registered Jun 25 16:20:27.750488 kernel: Key type fscrypt-provisioning registered Jun 25 16:20:27.750494 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 16:20:27.750500 kernel: ima: Allocated hash algorithm: sha1 Jun 25 16:20:27.750505 kernel: ima: No architecture policies found Jun 25 16:20:27.750511 kernel: clk: Disabling unused clocks Jun 25 16:20:27.750517 kernel: Freeing unused kernel image (initmem) memory: 47156K Jun 25 16:20:27.750523 kernel: Write protecting the kernel read-only data: 34816k Jun 25 16:20:27.750529 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jun 25 16:20:27.750534 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jun 25 16:20:27.750541 kernel: Run /init as init process Jun 25 16:20:27.750547 kernel: with arguments: Jun 25 16:20:27.750553 kernel: /init Jun 25 16:20:27.750558 kernel: with environment: Jun 25 16:20:27.750564 kernel: HOME=/ Jun 25 16:20:27.750569 kernel: TERM=linux Jun 25 16:20:27.750575 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 16:20:27.750582 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:20:27.750590 systemd[1]: Detected virtualization vmware. Jun 25 16:20:27.750596 systemd[1]: Detected architecture x86-64. Jun 25 16:20:27.750602 systemd[1]: Running in initrd. Jun 25 16:20:27.750607 systemd[1]: No hostname configured, using default hostname. Jun 25 16:20:27.750613 systemd[1]: Hostname set to . Jun 25 16:20:27.750619 systemd[1]: Initializing machine ID from random generator. Jun 25 16:20:27.750625 systemd[1]: Queued start job for default target initrd.target. Jun 25 16:20:27.750630 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:20:27.750637 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:20:27.750643 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:20:27.750649 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:20:27.750655 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:20:27.750660 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:20:27.750667 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:20:27.750672 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:20:27.750679 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 16:20:27.750685 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 16:20:27.750691 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 16:20:27.750697 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:20:27.750703 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:20:27.750709 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:20:27.750715 systemd[1]: Reached target sockets.target - Socket Units. 
Jun 25 16:20:27.750721 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:20:27.750727 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 16:20:27.750734 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 16:20:27.750739 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 16:20:27.750745 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:20:27.750751 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jun 25 16:20:27.750757 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:20:27.750763 kernel: audit: type=1130 audit(1719332427.712:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:27.750769 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 16:20:27.750775 kernel: audit: type=1130 audit(1719332427.724:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:27.750782 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:20:27.750788 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:20:27.750794 kernel: audit: type=1130 audit(1719332427.742:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:27.750800 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:20:27.750805 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 16:20:27.750811 kernel: audit: type=1130 audit(1719332427.745:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:27.750820 systemd-journald[212]: Journal started Jun 25 16:20:27.750872 systemd-journald[212]: Runtime Journal (/run/log/journal/aab7b042d100478f83f9e0bd28d10c77) is 4.8M, max 38.7M, 33.9M free. Jun 25 16:20:27.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:27.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:27.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:27.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:27.728779 systemd-modules-load[213]: Inserted module 'overlay' Jun 25 16:20:27.754279 systemd[1]: Started systemd-journald.service - Journal Service. 
Jun 25 16:20:27.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:27.759631 kernel: audit: type=1130 audit(1719332427.754:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:27.761184 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:20:27.762312 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:20:27.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:27.765445 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 16:20:27.765842 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:20:27.766043 kernel: audit: type=1130 audit(1719332427.761:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:27.766709 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:20:27.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:27.770648 kernel: audit: type=1130 audit(1719332427.765:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:27.770665 kernel: audit: type=1334 audit(1719332427.765:9): prog-id=6 op=LOAD Jun 25 16:20:27.765000 audit: BPF prog-id=6 op=LOAD Jun 25 16:20:27.775030 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 16:20:27.776937 systemd-modules-load[213]: Inserted module 'br_netfilter' Jun 25 16:20:27.777071 kernel: Bridge firewalling registered Jun 25 16:20:27.777709 dracut-cmdline[230]: dracut-dracut-053 Jun 25 16:20:27.783082 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:20:27.792021 kernel: SCSI subsystem initialized Jun 25 16:20:27.793140 systemd-resolved[231]: Positive Trust Anchors: Jun 25 16:20:27.793150 systemd-resolved[231]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:20:27.793177 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:20:27.796892 systemd-resolved[231]: Defaulting to hostname 'linux'. Jun 25 16:20:27.798088 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:20:27.798228 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:20:27.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:27.801244 kernel: audit: type=1130 audit(1719332427.797:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:27.808019 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 16:20:27.808050 kernel: device-mapper: uevent: version 1.0.3 Jun 25 16:20:27.809218 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 16:20:27.811465 systemd-modules-load[213]: Inserted module 'dm_multipath' Jun 25 16:20:27.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:27.812125 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:20:27.816163 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:20:27.820241 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:20:27.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:27.836019 kernel: Loading iSCSI transport class v2.0-870. Jun 25 16:20:27.843019 kernel: iscsi: registered transport (tcp) Jun 25 16:20:27.858017 kernel: iscsi: registered transport (qla4xxx) Jun 25 16:20:27.858036 kernel: QLogic iSCSI HBA Driver Jun 25 16:20:27.876108 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 16:20:27.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:27.879104 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 16:20:27.927039 kernel: raid6: avx2x4 gen() 38390 MB/s Jun 25 16:20:27.944043 kernel: raid6: avx2x2 gen() 47707 MB/s Jun 25 16:20:27.961250 kernel: raid6: avx2x1 gen() 44200 MB/s Jun 25 16:20:27.961291 kernel: raid6: using algorithm avx2x2 gen() 47707 MB/s Jun 25 16:20:27.979234 kernel: raid6: .... 
xor() 30305 MB/s, rmw enabled Jun 25 16:20:27.979273 kernel: raid6: using avx2x2 recovery algorithm Jun 25 16:20:27.982022 kernel: xor: automatically using best checksumming function avx Jun 25 16:20:28.080033 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jun 25 16:20:28.085968 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:20:28.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:28.085000 audit: BPF prog-id=7 op=LOAD Jun 25 16:20:28.085000 audit: BPF prog-id=8 op=LOAD Jun 25 16:20:28.089136 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:20:28.097315 systemd-udevd[412]: Using default interface naming scheme 'v252'. Jun 25 16:20:28.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:28.101167 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:20:28.101891 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 16:20:28.111286 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Jun 25 16:20:28.127489 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:20:28.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:28.136104 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:20:28.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:28.203468 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:20:28.278024 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jun 25 16:20:28.282017 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jun 25 16:20:28.284014 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jun 25 16:20:28.301477 kernel: vmw_pvscsi: using 64bit dma Jun 25 16:20:28.301492 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 16:20:28.301508 kernel: vmw_pvscsi: max_id: 16 Jun 25 16:20:28.301517 kernel: vmw_pvscsi: setting ring_pages to 8 Jun 25 16:20:28.301525 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 16:20:28.301536 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jun 25 16:20:28.301622 kernel: AES CTR mode by8 optimization enabled Jun 25 16:20:28.301634 kernel: vmw_pvscsi: enabling reqCallThreshold Jun 25 16:20:28.301644 kernel: vmw_pvscsi: driver-based request coalescing enabled Jun 25 16:20:28.301654 kernel: vmw_pvscsi: using MSI-X Jun 25 16:20:28.301661 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jun 25 16:20:28.301741 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jun 25 16:20:28.301813 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jun 25 16:20:28.313024 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jun 25 16:20:28.318017 kernel: libata version 3.00 loaded. 
Jun 25 16:20:28.324588 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jun 25 16:20:28.333373 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 25 16:20:28.333446 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jun 25 16:20:28.333516 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jun 25 16:20:28.333578 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jun 25 16:20:28.333648 kernel: ata_piix 0000:00:07.1: version 2.13 Jun 25 16:20:28.333709 kernel: scsi host1: ata_piix Jun 25 16:20:28.333776 kernel: scsi host2: ata_piix Jun 25 16:20:28.333847 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jun 25 16:20:28.333856 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jun 25 16:20:28.333863 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:20:28.333869 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 25 16:20:28.504026 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jun 25 16:20:28.508028 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jun 25 16:20:28.544587 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jun 25 16:20:28.569510 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 16:20:28.569530 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jun 25 16:20:28.577032 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (471) Jun 25 16:20:28.577947 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jun 25 16:20:28.581028 kernel: BTRFS: device fsid dda7891e-deba-495b-b677-4df6bea75326 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (466) Jun 25 16:20:28.586251 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jun 25 16:20:28.589749 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jun 25 16:20:28.589983 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jun 25 16:20:28.592095 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jun 25 16:20:28.594103 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 16:20:28.652029 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:20:28.658025 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:20:29.673597 disk-uuid[556]: The operation has completed successfully. Jun 25 16:20:29.674020 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:20:29.735618 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 16:20:29.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:29.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:29.735675 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 16:20:29.741212 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 16:20:29.744275 sh[573]: Success Jun 25 16:20:29.754026 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 25 16:20:29.817881 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Jun 25 16:20:29.818738 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 16:20:29.819110 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 16:20:29.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:29.835909 kernel: BTRFS info (device dm-0): first mount of filesystem dda7891e-deba-495b-b677-4df6bea75326 Jun 25 16:20:29.835950 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:20:29.835965 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 16:20:29.835973 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 16:20:29.836739 kernel: BTRFS info (device dm-0): using free space tree Jun 25 16:20:29.846077 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jun 25 16:20:29.850158 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 16:20:29.857184 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jun 25 16:20:29.857788 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 16:20:30.031552 kernel: BTRFS info (device sda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:20:30.031593 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:20:30.031605 kernel: BTRFS info (device sda6): using free space tree Jun 25 16:20:30.078036 kernel: BTRFS info (device sda6): enabling ssd optimizations Jun 25 16:20:30.087958 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 16:20:30.090066 kernel: BTRFS info (device sda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:20:30.097156 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 16:20:30.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:30.100230 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 16:20:30.242641 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jun 25 16:20:30.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:30.248216 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
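The /sysusr/usr mount set up above sits on the dm-verity mapping /dev/mapper/usr that verity-setup created from the verity.usrhash= kernel argument. As a small sketch (commands from the cryptsetup and util-linux packages, not taken from this log; output differs per system), the mapping can be inspected once the system is up:

    # Inspect the read-only, verity-protected /usr after boot (illustrative only).
    veritysetup status usr                      # shows data device, hash device and root-hash status
    findmnt -o TARGET,SOURCE,OPTIONS /usr       # /dev/mapper/usr should appear mounted read-only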
Jun 25 16:20:30.292777 ignition[632]: Ignition 2.15.0 Jun 25 16:20:30.293110 ignition[632]: Stage: fetch-offline Jun 25 16:20:30.293234 ignition[632]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:20:30.293354 ignition[632]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 25 16:20:30.293530 ignition[632]: parsed url from cmdline: "" Jun 25 16:20:30.293561 ignition[632]: no config URL provided Jun 25 16:20:30.293662 ignition[632]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:20:30.293786 ignition[632]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:20:30.294259 ignition[632]: config successfully fetched Jun 25 16:20:30.294308 ignition[632]: parsing config with SHA512: 2fd5520a9c7d4ad5ef32188c1c33f4dc1abf4a336c4535a4db2068c761ad3baf6442e78171207b3625b872683382b1699d4485a3a6b11880c43c519748f16a94 Jun 25 16:20:30.297938 unknown[632]: fetched base config from "system" Jun 25 16:20:30.298084 unknown[632]: fetched user config from "vmware" Jun 25 16:20:30.298906 ignition[632]: fetch-offline: fetch-offline passed Jun 25 16:20:30.299073 ignition[632]: Ignition finished successfully Jun 25 16:20:30.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:30.299707 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:20:30.314710 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:20:30.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:30.314000 audit: BPF prog-id=9 op=LOAD Jun 25 16:20:30.318101 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:20:30.330358 systemd-networkd[761]: lo: Link UP Jun 25 16:20:30.330365 systemd-networkd[761]: lo: Gained carrier Jun 25 16:20:30.330621 systemd-networkd[761]: Enumeration completed Jun 25 16:20:30.330818 systemd-networkd[761]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jun 25 16:20:30.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:30.331018 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:20:30.331166 systemd[1]: Reached target network.target - Network. Jun 25 16:20:30.331249 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 25 16:20:30.333989 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jun 25 16:20:30.334088 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jun 25 16:20:30.333504 systemd-networkd[761]: ens192: Link UP Jun 25 16:20:30.333506 systemd-networkd[761]: ens192: Gained carrier Jun 25 16:20:30.337793 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 16:20:30.338346 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:20:30.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:20:30.341809 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:20:30.342471 systemd[1]: Starting iscsid.service - Open-iSCSI... Jun 25 16:20:30.344223 ignition[763]: Ignition 2.15.0 Jun 25 16:20:30.344450 iscsid[773]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:20:30.344450 iscsid[773]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jun 25 16:20:30.344450 iscsid[773]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 16:20:30.344450 iscsid[773]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 16:20:30.344450 iscsid[773]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:20:30.344450 iscsid[773]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 16:20:30.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:30.345683 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 16:20:30.346896 ignition[763]: Stage: kargs Jun 25 16:20:30.347024 ignition[763]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:20:30.347033 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 25 16:20:30.348654 ignition[763]: kargs: kargs passed Jun 25 16:20:30.348695 ignition[763]: Ignition finished successfully Jun 25 16:20:30.350110 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 16:20:30.350466 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 16:20:30.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:30.351344 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 16:20:30.358731 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 16:20:30.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:30.358920 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:20:30.359039 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:20:30.359237 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:20:30.360159 ignition[775]: Ignition 2.15.0 Jun 25 16:20:30.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:30.360165 ignition[775]: Stage: disks Jun 25 16:20:30.363091 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 16:20:30.360250 ignition[775]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:20:30.363306 systemd[1]: Finished ignition-disks.service - Ignition (disks).
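The iscsid warning above spells out the file it expects. A minimal sketch of creating it ahead of time (the IQN below is a made-up placeholder, not this host's initiator name):

    # Create /etc/iscsi/initiatorname.iscsi with a single InitiatorName= line.
    # The value is hypothetical; iqn.<year>-<month>.<reversed domain>[:identifier] is the expected shape.
    cat > /etc/iscsi/initiatorname.iscsi <<'EOF'
    InitiatorName=iqn.2024-06.org.example.lab:flatcar-node01
    EOF
    # open-iscsi also ships a helper to generate a random IQN:
    # echo "InitiatorName=$(iscsi-iname)" > /etc/iscsi/initiatorname.iscsi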
Jun 25 16:20:30.360258 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 25 16:20:30.363474 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 16:20:30.360851 ignition[775]: disks: disks passed Jun 25 16:20:30.363582 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:20:30.360875 ignition[775]: Ignition finished successfully Jun 25 16:20:30.363690 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:20:30.363789 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:20:30.363886 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:20:30.368985 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:20:30.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:30.369614 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 16:20:30.381577 systemd-fsck[797]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jun 25 16:20:30.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:30.383076 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 16:20:30.387087 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 16:20:30.450025 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 16:20:30.450301 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 16:20:30.450463 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 16:20:30.457063 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:20:30.457782 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 16:20:30.458279 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 16:20:30.458485 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 16:20:30.458710 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:20:30.460782 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 16:20:30.461433 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 16:20:30.465018 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (803) Jun 25 16:20:30.465036 kernel: BTRFS info (device sda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:20:30.467055 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:20:30.467072 kernel: BTRFS info (device sda6): using free space tree Jun 25 16:20:30.474590 kernel: BTRFS info (device sda6): enabling ssd optimizations Jun 25 16:20:30.474438 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
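Earlier in this boot, systemd-networkd brought ens192 up from the dracut-generated 10-dracut-cmdline-99.network unit; the 00-vmware.network file that Ignition writes later is not reproduced in this journal. Purely for illustration, a minimal DHCP-based .network unit for this interface could look like the following (assumed contents, not the real file):

    # Hypothetical minimal systemd-networkd unit for ens192; the actual file contents are not shown in this log.
    cat > /etc/systemd/network/00-vmware.network <<'EOF'
    [Match]
    Name=ens192

    [Network]
    DHCP=yes
    EOF
    networkctl reload    # have systemd-networkd pick up the new unit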
Jun 25 16:20:30.489943 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 16:20:30.493841 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory Jun 25 16:20:30.496382 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 16:20:30.498645 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 16:20:30.549697 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 16:20:30.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:30.552086 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 16:20:30.552582 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 16:20:30.557024 kernel: BTRFS info (device sda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:20:30.565613 ignition[914]: INFO : Ignition 2.15.0 Jun 25 16:20:30.565613 ignition[914]: INFO : Stage: mount Jun 25 16:20:30.565960 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:20:30.565960 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 25 16:20:30.566239 ignition[914]: INFO : mount: mount passed Jun 25 16:20:30.566346 ignition[914]: INFO : Ignition finished successfully Jun 25 16:20:30.566911 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 16:20:30.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:30.570085 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 16:20:30.577691 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 16:20:30.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:30.832819 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 16:20:30.839162 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:20:30.930030 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (925) Jun 25 16:20:30.942754 kernel: BTRFS info (device sda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:20:30.942779 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:20:30.942787 kernel: BTRFS info (device sda6): using free space tree Jun 25 16:20:30.980025 kernel: BTRFS info (device sda6): enabling ssd optimizations Jun 25 16:20:30.985545 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 16:20:30.997255 ignition[943]: INFO : Ignition 2.15.0 Jun 25 16:20:30.997255 ignition[943]: INFO : Stage: files Jun 25 16:20:30.997560 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:20:30.997560 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 25 16:20:30.997868 ignition[943]: DEBUG : files: compiled without relabeling support, skipping Jun 25 16:20:31.008547 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 16:20:31.008547 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 16:20:31.045995 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 16:20:31.046170 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 16:20:31.046297 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 16:20:31.046234 unknown[943]: wrote ssh authorized keys file for user: core Jun 25 16:20:31.068232 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:20:31.068607 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 16:20:31.109133 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 16:20:31.175251 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:20:31.175528 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 16:20:31.175528 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 16:20:31.175528 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:20:31.175528 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:20:31.175528 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:20:31.176415 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:20:31.176415 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:20:31.176415 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:20:31.176415 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:20:31.176415 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:20:31.176415 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 16:20:31.176415 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 16:20:31.176415 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 16:20:31.176415 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jun 25 16:20:31.515080 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 16:20:31.787975 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 16:20:31.787975 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jun 25 16:20:31.788374 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jun 25 16:20:31.788374 ignition[943]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 25 16:20:31.798307 ignition[943]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:20:31.798470 ignition[943]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:20:31.798470 ignition[943]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 25 16:20:31.798470 ignition[943]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jun 25 16:20:31.798470 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 16:20:31.798998 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 16:20:31.798998 ignition[943]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jun 25 16:20:31.798998 ignition[943]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jun 25 16:20:31.798998 ignition[943]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 16:20:31.947079 ignition[943]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 16:20:31.947279 ignition[943]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jun 25 16:20:31.947279 ignition[943]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jun 25 16:20:31.947279 ignition[943]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 16:20:31.947279 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:20:31.947798 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:20:31.947798 ignition[943]: INFO : files: files passed Jun 25 16:20:31.947798 ignition[943]: INFO : Ignition finished successfully Jun 25 16:20:31.948553 systemd[1]: Finished ignition-files.service - Ignition 
(files). Jun 25 16:20:31.954908 kernel: kauditd_printk_skb: 28 callbacks suppressed Jun 25 16:20:31.954923 kernel: audit: type=1130 audit(1719332431.947:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:31.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:31.953143 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 16:20:31.953760 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 16:20:31.956175 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 16:20:31.956223 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 16:20:31.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:31.959011 kernel: audit: type=1130 audit(1719332431.955:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:31.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:31.962038 kernel: audit: type=1131 audit(1719332431.955:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:31.963501 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:20:31.963501 initrd-setup-root-after-ignition[970]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:20:31.964703 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:20:31.965573 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:20:31.968317 kernel: audit: type=1130 audit(1719332431.964:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:31.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:31.965740 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 16:20:31.970138 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 16:20:31.977557 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 16:20:31.977743 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 16:20:31.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:20:31.978097 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 16:20:31.982810 kernel: audit: type=1130 audit(1719332431.976:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:31.982823 kernel: audit: type=1131 audit(1719332431.977:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:31.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:31.982871 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 16:20:31.983017 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 16:20:31.983496 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 16:20:31.989989 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:20:31.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:31.993068 kernel: audit: type=1130 audit(1719332431.989:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:31.994232 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 16:20:31.998950 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:20:31.999271 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:20:31.999580 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 16:20:31.999976 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 16:20:32.000184 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:20:31.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.000551 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 16:20:32.003017 kernel: audit: type=1131 audit(1719332431.999:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.003052 systemd[1]: Stopped target basic.target - Basic System. Jun 25 16:20:32.003223 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 16:20:32.003454 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:20:32.003628 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 16:20:32.003801 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 16:20:32.004011 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:20:32.004193 systemd[1]: Stopped target sysinit.target - System Initialization. 
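The Ignition files stage above (SSH keys for "core", the helm tarball, the nginx/nfs manifests, the kubernetes sysext link, and the unit presets) is driven by a user config that the journal does not reproduce. Purely as a sketch of the spec-3 JSON shape such a config takes, with every value below a placeholder rather than this machine's data:

    # Hypothetical Ignition (spec 3.x) user config sketch; the key, file and unit contents are placeholders.
    cat > user.ign <<'EOF'
    {
      "ignition": { "version": "3.3.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": [ "ssh-ed25519 AAAA...example core@example" ] }
        ]
      },
      "storage": {
        "files": [
          { "path": "/etc/flatcar/update.conf", "mode": 420,
            "contents": { "source": "data:,SERVER=disabled" } }
        ]
      },
      "systemd": {
        "units": [
          { "name": "coreos-metadata.service", "enabled": false },
          { "name": "prepare-helm.service", "enabled": true,
            "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n[Service]\nType=oneshot\nExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.13.2-linux-amd64.tar.gz\n[Install]\nWantedBy=multi-user.target\n" }
        ]
      }
    }
    EOF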
Jun 25 16:20:32.004367 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 16:20:32.004539 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:20:32.004712 systemd[1]: Stopped target swap.target - Swaps. Jun 25 16:20:32.004857 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 16:20:32.004930 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:20:32.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.005219 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:20:32.007694 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 16:20:32.007888 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 16:20:32.008018 kernel: audit: type=1131 audit(1719332432.004:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.008246 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 16:20:32.008332 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:20:32.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.010922 systemd[1]: Stopped target paths.target - Path Units. Jun 25 16:20:32.011041 kernel: audit: type=1131 audit(1719332432.007:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.011204 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 16:20:32.016025 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:20:32.016360 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 16:20:32.016632 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 16:20:32.016898 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 16:20:32.017124 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:20:32.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.017500 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 16:20:32.017698 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 16:20:32.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.025233 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 16:20:32.025561 systemd[1]: Stopping iscsid.service - Open-iSCSI... 
Jun 25 16:20:32.025764 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 16:20:32.025969 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:20:32.026748 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 16:20:32.026981 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 16:20:32.027209 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:20:32.027540 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 16:20:32.027743 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:20:32.029098 iscsid[773]: iscsid shutting down. Jun 25 16:20:32.029899 systemd[1]: iscsid.service: Deactivated successfully. Jun 25 16:20:32.030096 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jun 25 16:20:32.030445 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 16:20:32.030621 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 16:20:32.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.032721 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 16:20:32.032868 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:20:32.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.033310 systemd-networkd[761]: ens192: Gained IPv6LL Jun 25 16:20:32.034155 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:20:32.034919 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 16:20:32.035102 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:20:32.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.035423 systemd[1]: Stopped target network.target - Network. Jun 25 16:20:32.035533 ignition[988]: INFO : Ignition 2.15.0 Jun 25 16:20:32.035533 ignition[988]: INFO : Stage: umount Jun 25 16:20:32.035782 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:20:32.035782 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 25 16:20:32.036137 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jun 25 16:20:32.036306 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:20:32.036729 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 16:20:32.036864 ignition[988]: INFO : umount: umount passed Jun 25 16:20:32.036864 ignition[988]: INFO : Ignition finished successfully Jun 25 16:20:32.037244 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 16:20:32.037666 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 16:20:32.037835 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 16:20:32.038141 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 16:20:32.038296 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 16:20:32.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.038598 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 16:20:32.038739 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 16:20:32.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.039000 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 16:20:32.039147 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 16:20:32.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.040602 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 16:20:32.040951 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 16:20:32.041132 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 16:20:32.041571 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 16:20:32.041716 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:20:32.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.043000 audit: BPF prog-id=9 op=UNLOAD Jun 25 16:20:32.044474 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 16:20:32.044700 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 16:20:32.044855 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:20:32.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.045186 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jun 25 16:20:32.045337 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. 
Jun 25 16:20:32.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.045633 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 16:20:32.045777 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:20:32.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.046102 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 16:20:32.046265 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 16:20:32.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.046592 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:20:32.048652 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 16:20:32.049246 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 16:20:32.049459 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 16:20:32.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.050287 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 16:20:32.050493 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:20:32.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.051397 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 16:20:32.051804 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 16:20:32.051981 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 16:20:32.051000 audit: BPF prog-id=6 op=UNLOAD Jun 25 16:20:32.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.052529 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 16:20:32.052724 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:20:32.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.053105 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 16:20:32.053260 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 16:20:32.053504 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 16:20:32.053650 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:20:32.053881 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jun 25 16:20:32.054106 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:20:32.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.054385 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 16:20:32.054526 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 16:20:32.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.054789 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 16:20:32.054933 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:20:32.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.060191 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 16:20:32.060435 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 16:20:32.060591 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:20:32.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.062892 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 16:20:32.063086 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 16:20:32.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.126941 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 16:20:32.127216 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 16:20:32.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.127635 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 16:20:32.127951 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 16:20:32.128158 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 16:20:32.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.131212 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 16:20:32.136096 systemd[1]: Switching root. Jun 25 16:20:32.155015 systemd-journald[212]: Received SIGTERM from PID 1 (systemd). Jun 25 16:20:32.155049 systemd-journald[212]: Journal stopped Jun 25 16:20:33.047535 kernel: SELinux: Permission cmd in class io_uring not defined in policy. 
Jun 25 16:20:33.047561 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 16:20:33.047570 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 16:20:33.047575 kernel: SELinux: policy capability open_perms=1 Jun 25 16:20:33.047581 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 16:20:33.047586 kernel: SELinux: policy capability always_check_network=0 Jun 25 16:20:33.047593 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 16:20:33.047599 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 16:20:33.047604 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 16:20:33.047610 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 16:20:33.047616 systemd[1]: Successfully loaded SELinux policy in 97.809ms. Jun 25 16:20:33.047624 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.027ms. Jun 25 16:20:33.047631 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:20:33.047638 systemd[1]: Detected virtualization vmware. Jun 25 16:20:33.047645 systemd[1]: Detected architecture x86-64. Jun 25 16:20:33.047651 systemd[1]: Detected first boot. Jun 25 16:20:33.047658 systemd[1]: Initializing machine ID from random generator. Jun 25 16:20:33.047664 systemd[1]: Populated /etc with preset unit settings. Jun 25 16:20:33.048766 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 25 16:20:33.048777 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Jun 25 16:20:33.048784 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 16:20:33.048790 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 16:20:33.048797 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 16:20:33.048803 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 16:20:33.048812 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 16:20:33.048819 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 16:20:33.048825 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 16:20:33.048831 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 16:20:33.048838 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 16:20:33.048845 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 16:20:33.048852 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 16:20:33.048858 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:20:33.048866 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 16:20:33.048872 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
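The "Ignoring unknown escape sequences" warning above is systemd reacting to the backslashes (\K, \d) inside the quoted ExecStart line of coreos-metadata.service; the command still runs, and moving the pipeline into a separate script (or doubling the backslashes) is the usual way to silence it. Restated as a standalone sketch of what the quoted pipeline does (the interface name and the 10.* private-range assumption come from the log; the OUTPUT path is a placeholder):

    # Write COREOS_CUSTOM_{PRIVATE,PUBLIC}_IPV4 for ens192, mirroring the pipeline quoted in the journal above.
    OUTPUT=/run/metadata/flatcar          # placeholder path
    mkdir -p "$(dirname "$OUTPUT")"
    priv=$(ip -4 addr show ens192 | grep "inet 10." | grep -Po 'inet \K[\d.]+' | head -n1)
    pub=$(ip -4 addr show ens192 | grep -v "inet 10." | grep -Po 'inet \K[\d.]+' | head -n1)
    printf 'COREOS_CUSTOM_PRIVATE_IPV4=%s\nCOREOS_CUSTOM_PUBLIC_IPV4=%s\n' "$priv" "$pub" > "$OUTPUT"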
Jun 25 16:20:33.048879 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 16:20:33.048886 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 16:20:33.048892 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 16:20:33.048898 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 16:20:33.048904 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 16:20:33.048911 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:20:33.048919 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:20:33.048927 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:20:33.048934 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:20:33.048940 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 16:20:33.048946 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 16:20:33.048953 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 16:20:33.048959 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:20:33.048966 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:20:33.048973 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:20:33.048980 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 16:20:33.048988 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 16:20:33.048994 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 16:20:33.049001 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 16:20:33.049034 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:20:33.049043 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 16:20:33.049052 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 16:20:33.049379 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 16:20:33.049388 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 16:20:33.049395 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Jun 25 16:20:33.049402 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:20:33.049409 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 16:20:33.049417 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:20:33.049424 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:20:33.049431 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:20:33.049438 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 16:20:33.049445 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:20:33.049452 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 16:20:33.049459 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jun 25 16:20:33.049465 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 16:20:33.049472 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 16:20:33.049480 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 16:20:33.049487 systemd[1]: Stopped systemd-journald.service - Journal Service. Jun 25 16:20:33.049493 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 16:20:33.049500 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:20:33.049507 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 16:20:33.049514 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 16:20:33.049521 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:20:33.049528 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 16:20:33.049536 systemd[1]: Stopped verity-setup.service. Jun 25 16:20:33.049543 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:20:33.049550 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 16:20:33.049557 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 16:20:33.049563 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 16:20:33.049587 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 16:20:33.049595 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 16:20:33.049602 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 16:20:33.049609 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:20:33.049617 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 16:20:33.049624 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 16:20:33.049631 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:20:33.049638 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:20:33.049644 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:20:33.049652 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:20:33.049658 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:20:33.049665 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 16:20:33.049673 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 16:20:33.049680 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 16:20:33.049687 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 16:20:33.049693 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 16:20:33.049700 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 16:20:33.049707 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jun 25 16:20:33.049717 systemd-journald[1100]: Journal started Jun 25 16:20:33.049751 systemd-journald[1100]: Runtime Journal (/run/log/journal/4fe5dbd71ccb44808610c69f69e74cf8) is 4.8M, max 38.7M, 33.9M free. Jun 25 16:20:32.361000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 16:20:32.638000 audit: BPF prog-id=10 op=LOAD Jun 25 16:20:32.638000 audit: BPF prog-id=10 op=UNLOAD Jun 25 16:20:32.638000 audit: BPF prog-id=11 op=LOAD Jun 25 16:20:32.638000 audit: BPF prog-id=11 op=UNLOAD Jun 25 16:20:32.909000 audit: BPF prog-id=12 op=LOAD Jun 25 16:20:32.909000 audit: BPF prog-id=3 op=UNLOAD Jun 25 16:20:32.909000 audit: BPF prog-id=13 op=LOAD Jun 25 16:20:32.909000 audit: BPF prog-id=14 op=LOAD Jun 25 16:20:32.909000 audit: BPF prog-id=4 op=UNLOAD Jun 25 16:20:32.909000 audit: BPF prog-id=5 op=UNLOAD Jun 25 16:20:32.909000 audit: BPF prog-id=15 op=LOAD Jun 25 16:20:32.909000 audit: BPF prog-id=12 op=UNLOAD Jun 25 16:20:32.909000 audit: BPF prog-id=16 op=LOAD Jun 25 16:20:32.909000 audit: BPF prog-id=17 op=LOAD Jun 25 16:20:32.909000 audit: BPF prog-id=13 op=UNLOAD Jun 25 16:20:32.909000 audit: BPF prog-id=14 op=UNLOAD Jun 25 16:20:32.910000 audit: BPF prog-id=18 op=LOAD Jun 25 16:20:32.910000 audit: BPF prog-id=15 op=UNLOAD Jun 25 16:20:32.910000 audit: BPF prog-id=19 op=LOAD Jun 25 16:20:32.910000 audit: BPF prog-id=20 op=LOAD Jun 25 16:20:32.910000 audit: BPF prog-id=16 op=UNLOAD Jun 25 16:20:32.910000 audit: BPF prog-id=17 op=UNLOAD Jun 25 16:20:32.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.913000 audit: BPF prog-id=18 op=UNLOAD Jun 25 16:20:32.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:20:32.991000 audit: BPF prog-id=21 op=LOAD Jun 25 16:20:32.991000 audit: BPF prog-id=22 op=LOAD Jun 25 16:20:32.991000 audit: BPF prog-id=23 op=LOAD Jun 25 16:20:32.991000 audit: BPF prog-id=19 op=UNLOAD Jun 25 16:20:32.991000 audit: BPF prog-id=20 op=UNLOAD Jun 25 16:20:33.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:20:33.034000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 16:20:33.034000 audit[1100]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffcf8230640 a2=4000 a3=7ffcf82306dc items=0 ppid=1 pid=1100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:33.034000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 16:20:32.901120 systemd[1]: Queued start job for default target multi-user.target. Jun 25 16:20:33.060465 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 16:20:33.060483 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:20:33.060492 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:20:33.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:32.901126 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 25 16:20:32.911545 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 16:20:33.059535 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 16:20:33.060897 jq[1084]: true Jun 25 16:20:33.062483 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 16:20:33.065255 jq[1109]: true Jun 25 16:20:33.070433 systemd-journald[1100]: Time spent on flushing to /var/log/journal/4fe5dbd71ccb44808610c69f69e74cf8 is 27.755ms for 1956 entries. Jun 25 16:20:33.070433 systemd-journald[1100]: System Journal (/var/log/journal/4fe5dbd71ccb44808610c69f69e74cf8) is 8.0M, max 584.8M, 576.8M free. Jun 25 16:20:33.105679 systemd-journald[1100]: Received client request to flush runtime journal. Jun 25 16:20:33.105708 kernel: loop: module loaded Jun 25 16:20:33.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.076908 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 16:20:33.077093 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 16:20:33.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.106281 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 16:20:33.106564 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:20:33.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:20:33.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.118914 ignition[1116]: Ignition 2.15.0 Jun 25 16:20:33.123282 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:20:33.119158 ignition[1116]: deleting config from guestinfo properties Jun 25 16:20:33.123379 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:20:33.123609 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:20:33.131376 ignition[1116]: Successfully deleted config Jun 25 16:20:33.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.133671 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Jun 25 16:20:33.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.145872 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 16:20:33.148707 kernel: fuse: init (API version 7.37) Jun 25 16:20:33.148740 kernel: ACPI: bus type drm_connector registered Jun 25 16:20:33.148522 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 16:20:33.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.149381 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:20:33.149474 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:20:33.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.151288 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 16:20:33.151383 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 16:20:33.152593 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 16:20:33.154577 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 16:20:33.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.168125 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jun 25 16:20:33.195418 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:20:33.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.199145 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 16:20:33.204961 udevadm[1144]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jun 25 16:20:33.526517 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 16:20:33.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.526000 audit: BPF prog-id=24 op=LOAD Jun 25 16:20:33.526000 audit: BPF prog-id=25 op=LOAD Jun 25 16:20:33.526000 audit: BPF prog-id=7 op=UNLOAD Jun 25 16:20:33.526000 audit: BPF prog-id=8 op=UNLOAD Jun 25 16:20:33.533168 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:20:33.544622 systemd-udevd[1145]: Using default interface naming scheme 'v252'. Jun 25 16:20:33.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.557000 audit: BPF prog-id=26 op=LOAD Jun 25 16:20:33.557910 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:20:33.564000 audit: BPF prog-id=27 op=LOAD Jun 25 16:20:33.564000 audit: BPF prog-id=28 op=LOAD Jun 25 16:20:33.564000 audit: BPF prog-id=29 op=LOAD Jun 25 16:20:33.564086 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:20:33.565933 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 16:20:33.585838 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 25 16:20:33.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.587379 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 16:20:33.598153 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1153) Jun 25 16:20:33.626088 systemd-networkd[1152]: lo: Link UP Jun 25 16:20:33.626093 systemd-networkd[1152]: lo: Gained carrier Jun 25 16:20:33.626326 systemd-networkd[1152]: Enumeration completed Jun 25 16:20:33.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.626376 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:20:33.626382 systemd-networkd[1152]: ens192: Configuring with /etc/systemd/network/00-vmware.network. 
Jun 25 16:20:33.628039 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jun 25 16:20:33.628155 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jun 25 16:20:33.629388 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Jun 25 16:20:33.629735 systemd-networkd[1152]: ens192: Link UP Jun 25 16:20:33.629823 systemd-networkd[1152]: ens192: Gained carrier Jun 25 16:20:33.630112 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 16:20:33.632024 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jun 25 16:20:33.637069 kernel: ACPI: button: Power Button [PWRF] Jun 25 16:20:33.656017 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1150) Jun 25 16:20:33.684511 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jun 25 16:20:33.700041 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Jun 25 16:20:33.703017 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jun 25 16:20:33.705396 kernel: Guest personality initialized and is active Jun 25 16:20:33.705420 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jun 25 16:20:33.705433 kernel: Initialized host personality Jun 25 16:20:33.714018 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jun 25 16:20:33.726434 (udev-worker)[1157]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jun 25 16:20:33.736016 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 16:20:33.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.755286 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 16:20:33.763174 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 16:20:33.770428 lvm[1182]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:20:33.792468 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 16:20:33.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.792650 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:20:33.799123 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 16:20:33.801566 lvm[1183]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:20:33.819538 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 16:20:33.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.819721 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:20:33.819839 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Jun 25 16:20:33.819860 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:20:33.819969 systemd[1]: Reached target machines.target - Containers. Jun 25 16:20:33.823165 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 16:20:33.823560 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:20:33.823596 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:20:33.824582 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 16:20:33.825402 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 16:20:33.826515 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 16:20:33.828191 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 16:20:33.833584 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1185 (bootctl) Jun 25 16:20:33.836101 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 16:20:33.844624 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 16:20:33.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.858342 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 16:20:33.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.860063 kernel: loop0: detected capacity change from 0 to 3000 Jun 25 16:20:33.881022 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 16:20:33.908017 kernel: loop1: detected capacity change from 0 to 210664 Jun 25 16:20:33.939184 kernel: loop2: detected capacity change from 0 to 80584 Jun 25 16:20:33.941446 systemd-fsck[1192]: fsck.fat 4.2 (2021-01-31) Jun 25 16:20:33.941446 systemd-fsck[1192]: /dev/sda1: 808 files, 120378/258078 clusters Jun 25 16:20:33.943361 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:20:33.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:33.975087 kernel: loop3: detected capacity change from 0 to 139360 Jun 25 16:20:34.013641 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 16:20:34.019083 systemd[1]: Mounting boot.mount - Boot partition... Jun 25 16:20:34.054023 kernel: loop4: detected capacity change from 0 to 3000 Jun 25 16:20:34.126123 systemd[1]: Mounted boot.mount - Boot partition. 
Jun 25 16:20:34.204024 kernel: loop5: detected capacity change from 0 to 210664 Jun 25 16:20:34.215563 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jun 25 16:20:34.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.440046 kernel: loop6: detected capacity change from 0 to 80584 Jun 25 16:20:34.631232 kernel: loop7: detected capacity change from 0 to 139360 Jun 25 16:20:34.656823 (sd-sysext)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Jun 25 16:20:34.658024 (sd-sysext)[1199]: Merged extensions into '/usr'. Jun 25 16:20:34.658875 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 16:20:34.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.662422 systemd[1]: Starting ensure-sysext.service... Jun 25 16:20:34.664254 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:20:34.675168 systemd[1]: Reloading. Jun 25 16:20:34.678792 systemd-tmpfiles[1201]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 16:20:34.680787 systemd-tmpfiles[1201]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 16:20:34.682353 systemd-tmpfiles[1201]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 16:20:34.683932 systemd-tmpfiles[1201]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 16:20:34.740877 ldconfig[1184]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 16:20:34.785188 systemd-networkd[1152]: ens192: Gained IPv6LL Jun 25 16:20:34.787391 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 25 16:20:34.798716 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jun 25 16:20:34.840000 audit: BPF prog-id=30 op=LOAD Jun 25 16:20:34.840000 audit: BPF prog-id=26 op=UNLOAD Jun 25 16:20:34.840000 audit: BPF prog-id=31 op=LOAD Jun 25 16:20:34.840000 audit: BPF prog-id=21 op=UNLOAD Jun 25 16:20:34.840000 audit: BPF prog-id=32 op=LOAD Jun 25 16:20:34.840000 audit: BPF prog-id=33 op=LOAD Jun 25 16:20:34.840000 audit: BPF prog-id=22 op=UNLOAD Jun 25 16:20:34.840000 audit: BPF prog-id=23 op=UNLOAD Jun 25 16:20:34.841000 audit: BPF prog-id=34 op=LOAD Jun 25 16:20:34.841000 audit: BPF prog-id=35 op=LOAD Jun 25 16:20:34.841000 audit: BPF prog-id=24 op=UNLOAD Jun 25 16:20:34.841000 audit: BPF prog-id=25 op=UNLOAD Jun 25 16:20:34.841000 audit: BPF prog-id=36 op=LOAD Jun 25 16:20:34.841000 audit: BPF prog-id=27 op=UNLOAD Jun 25 16:20:34.841000 audit: BPF prog-id=37 op=LOAD Jun 25 16:20:34.841000 audit: BPF prog-id=38 op=LOAD Jun 25 16:20:34.841000 audit: BPF prog-id=28 op=UNLOAD Jun 25 16:20:34.842000 audit: BPF prog-id=29 op=UNLOAD Jun 25 16:20:34.844875 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 16:20:34.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.845321 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 16:20:34.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.851431 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:20:34.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.853706 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:20:34.859000 audit: BPF prog-id=39 op=LOAD Jun 25 16:20:34.860000 audit: BPF prog-id=40 op=LOAD Jun 25 16:20:34.857672 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 16:20:34.859076 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 16:20:34.860925 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:20:34.862597 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 16:20:34.864835 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 16:20:34.866000 audit[1286]: SYSTEM_BOOT pid=1286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.872989 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 16:20:34.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.875748 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jun 25 16:20:34.879271 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:20:34.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.880239 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:20:34.881195 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:20:34.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.881351 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:20:34.881441 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:20:34.881517 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:20:34.881966 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:20:34.882151 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:20:34.882504 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:20:34.882605 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:20:34.882925 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:20:34.883657 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:20:34.883734 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:20:34.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.885039 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:20:34.891427 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:20:34.892536 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:20:34.893588 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jun 25 16:20:34.893755 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:20:34.893834 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:20:34.893903 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:20:34.894449 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:20:34.894555 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:20:34.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.894921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:20:34.895036 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:20:34.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.895377 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:20:34.897423 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:20:34.897507 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:20:34.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.897861 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:20:34.903334 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:20:34.904332 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:20:34.905554 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:20:34.905765 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jun 25 16:20:34.905866 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:20:34.905966 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:20:34.906532 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:20:34.906641 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:20:34.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.906992 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:20:34.908115 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:20:34.908205 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:20:34.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.908878 systemd[1]: Finished ensure-sysext.service. Jun 25 16:20:34.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.909300 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:20:34.909386 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:20:34.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.909573 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:20:34.910030 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 16:20:34.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.915200 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jun 25 16:20:34.922613 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 16:20:34.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.934939 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 16:20:34.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.935155 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 16:20:34.938098 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 16:20:34.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.938278 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 16:20:34.940382 augenrules[1308]: No rules Jun 25 16:20:34.939000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 16:20:34.939000 audit[1308]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc3688e660 a2=420 a3=0 items=0 ppid=1277 pid=1308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:34.939000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 16:20:34.940917 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:20:34.949664 systemd-resolved[1283]: Positive Trust Anchors: Jun 25 16:20:34.949679 systemd-resolved[1283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:20:34.949699 systemd-resolved[1283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:20:34.952390 systemd-resolved[1283]: Defaulting to hostname 'linux'. Jun 25 16:20:34.953481 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:20:34.953657 systemd[1]: Reached target network.target - Network. Jun 25 16:20:34.953751 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 16:20:34.953855 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:20:34.953967 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:20:34.954128 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jun 25 16:20:34.954253 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 16:20:34.954445 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 16:20:34.954603 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 16:20:34.954742 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 16:20:34.954855 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 16:20:34.954874 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:20:34.954963 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:20:34.955312 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 16:20:34.956273 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 16:20:34.961154 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 16:20:34.961406 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:20:34.961706 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 16:20:34.961862 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:20:34.961963 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:20:34.962094 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:20:34.962112 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:20:34.962951 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 16:20:34.963978 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Jun 25 16:20:34.965510 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 16:20:34.966772 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 16:20:34.968892 jq[1318]: false Jun 25 16:20:34.969947 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 16:20:34.970110 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 16:20:34.971332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:20:34.973323 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 16:20:34.977137 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 16:20:34.978380 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 16:20:34.979402 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 16:20:34.980405 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 16:20:34.982303 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 16:20:34.982439 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jun 25 16:20:34.982475 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 16:21:29.596491 systemd-timesyncd[1285]: Contacted time server 108.181.220.94:123 (0.flatcar.pool.ntp.org). Jun 25 16:21:29.596519 systemd-timesyncd[1285]: Initial clock synchronization to Tue 2024-06-25 16:21:29.596425 UTC. Jun 25 16:21:29.596656 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 16:21:29.597685 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 16:21:29.601074 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 16:21:29.602446 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Jun 25 16:21:29.605626 jq[1331]: true Jun 25 16:21:29.609373 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 16:21:29.609505 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 16:21:29.610136 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 16:21:29.616057 jq[1338]: true Jun 25 16:21:29.610248 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 16:21:29.631905 update_engine[1329]: I0625 16:21:29.607473 1329 main.cc:92] Flatcar Update Engine starting Jun 25 16:21:29.630458 systemd-resolved[1283]: Clock change detected. Flushing caches. Jun 25 16:21:29.634683 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Jun 25 16:21:29.636348 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Jun 25 16:21:29.643551 dbus-daemon[1317]: [system] SELinux support is enabled Jun 25 16:21:29.643667 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 16:21:29.645112 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 16:21:29.645135 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 16:21:29.645273 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 16:21:29.645286 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 16:21:29.649027 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Jun 25 16:21:29.652159 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 16:21:29.652276 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jun 25 16:21:29.655641 update_engine[1329]: I0625 16:21:29.655537 1329 update_check_scheduler.cc:74] Next update check in 7m7s Jun 25 16:21:29.656176 tar[1340]: linux-amd64/helm Jun 25 16:21:29.656301 extend-filesystems[1319]: Found loop4 Jun 25 16:21:29.656301 extend-filesystems[1319]: Found loop5 Jun 25 16:21:29.656301 extend-filesystems[1319]: Found loop6 Jun 25 16:21:29.656301 extend-filesystems[1319]: Found loop7 Jun 25 16:21:29.657705 extend-filesystems[1319]: Found sda Jun 25 16:21:29.658252 extend-filesystems[1319]: Found sda1 Jun 25 16:21:29.658252 extend-filesystems[1319]: Found sda2 Jun 25 16:21:29.658743 extend-filesystems[1319]: Found sda3 Jun 25 16:21:29.658743 extend-filesystems[1319]: Found usr Jun 25 16:21:29.658743 extend-filesystems[1319]: Found sda4 Jun 25 16:21:29.658743 extend-filesystems[1319]: Found sda6 Jun 25 16:21:29.658743 extend-filesystems[1319]: Found sda7 Jun 25 16:21:29.658743 extend-filesystems[1319]: Found sda9 Jun 25 16:21:29.658743 extend-filesystems[1319]: Checking size of /dev/sda9 Jun 25 16:21:29.661929 systemd[1]: Started update-engine.service - Update Engine. Jun 25 16:21:29.666604 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 16:21:29.671138 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 16:21:29.682464 unknown[1353]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Jun 25 16:21:29.690046 unknown[1353]: Core dump limit set to -1 Jun 25 16:21:29.692754 extend-filesystems[1319]: Old size kept for /dev/sda9 Jun 25 16:21:29.693403 extend-filesystems[1319]: Found sr0 Jun 25 16:21:29.693077 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 16:21:29.693182 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 16:21:29.712929 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1146) Jun 25 16:21:29.712975 kernel: NET: Registered PF_VSOCK protocol family Jun 25 16:21:29.761663 systemd-logind[1327]: Watching system buttons on /dev/input/event1 (Power Button) Jun 25 16:21:29.761873 systemd-logind[1327]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 16:21:29.762350 bash[1376]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:21:29.762892 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 16:21:29.765098 systemd-logind[1327]: New seat seat0. Jun 25 16:21:29.766939 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 25 16:21:29.771705 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 25 16:21:29.771826 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Jun 25 16:21:29.772827 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 16:21:29.777864 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 16:21:29.848371 locksmithd[1368]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 16:21:29.964601 containerd[1339]: time="2024-06-25T16:21:29.964551382Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 16:21:30.026273 containerd[1339]: time="2024-06-25T16:21:30.025652682Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jun 25 16:21:30.026273 containerd[1339]: time="2024-06-25T16:21:30.025688981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:21:30.028301 containerd[1339]: time="2024-06-25T16:21:30.027942508Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:21:30.028301 containerd[1339]: time="2024-06-25T16:21:30.027960379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:21:30.028301 containerd[1339]: time="2024-06-25T16:21:30.028102037Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:21:30.028301 containerd[1339]: time="2024-06-25T16:21:30.028112509Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 16:21:30.028301 containerd[1339]: time="2024-06-25T16:21:30.028162988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 16:21:30.028301 containerd[1339]: time="2024-06-25T16:21:30.028197496Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:21:30.028301 containerd[1339]: time="2024-06-25T16:21:30.028205776Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 16:21:30.028301 containerd[1339]: time="2024-06-25T16:21:30.028243791Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:21:30.028440 containerd[1339]: time="2024-06-25T16:21:30.028363377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 16:21:30.028440 containerd[1339]: time="2024-06-25T16:21:30.028374353Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 16:21:30.028440 containerd[1339]: time="2024-06-25T16:21:30.028382853Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:21:30.028518 containerd[1339]: time="2024-06-25T16:21:30.028437712Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:21:30.028518 containerd[1339]: time="2024-06-25T16:21:30.028446109Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jun 25 16:21:30.028518 containerd[1339]: time="2024-06-25T16:21:30.028472454Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 16:21:30.028518 containerd[1339]: time="2024-06-25T16:21:30.028490891Z" level=info msg="metadata content store policy set" policy=shared Jun 25 16:21:30.035499 containerd[1339]: time="2024-06-25T16:21:30.033614353Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 16:21:30.035499 containerd[1339]: time="2024-06-25T16:21:30.033642381Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 16:21:30.035499 containerd[1339]: time="2024-06-25T16:21:30.033651185Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 16:21:30.035499 containerd[1339]: time="2024-06-25T16:21:30.033668319Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 16:21:30.035499 containerd[1339]: time="2024-06-25T16:21:30.033676849Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 16:21:30.035499 containerd[1339]: time="2024-06-25T16:21:30.033682670Z" level=info msg="NRI interface is disabled by configuration." Jun 25 16:21:30.035499 containerd[1339]: time="2024-06-25T16:21:30.033689955Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 16:21:30.035499 containerd[1339]: time="2024-06-25T16:21:30.033758381Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 16:21:30.035499 containerd[1339]: time="2024-06-25T16:21:30.033768353Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 16:21:30.035499 containerd[1339]: time="2024-06-25T16:21:30.033776246Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 16:21:30.035499 containerd[1339]: time="2024-06-25T16:21:30.033783412Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 16:21:30.035499 containerd[1339]: time="2024-06-25T16:21:30.033790677Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 16:21:30.035499 containerd[1339]: time="2024-06-25T16:21:30.033800586Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 16:21:30.035499 containerd[1339]: time="2024-06-25T16:21:30.033810969Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 16:21:30.035753 containerd[1339]: time="2024-06-25T16:21:30.033818576Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 16:21:30.035753 containerd[1339]: time="2024-06-25T16:21:30.033827365Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 16:21:30.035753 containerd[1339]: time="2024-06-25T16:21:30.033835438Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jun 25 16:21:30.035753 containerd[1339]: time="2024-06-25T16:21:30.033842678Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 16:21:30.035753 containerd[1339]: time="2024-06-25T16:21:30.033849076Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 16:21:30.035753 containerd[1339]: time="2024-06-25T16:21:30.033902766Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 16:21:30.035753 containerd[1339]: time="2024-06-25T16:21:30.034290405Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 16:21:30.035753 containerd[1339]: time="2024-06-25T16:21:30.034335805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 16:21:30.035753 containerd[1339]: time="2024-06-25T16:21:30.034349175Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 16:21:30.035753 containerd[1339]: time="2024-06-25T16:21:30.034415367Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 16:21:30.035753 containerd[1339]: time="2024-06-25T16:21:30.034495212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 16:21:30.035753 containerd[1339]: time="2024-06-25T16:21:30.034540023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 16:21:30.035753 containerd[1339]: time="2024-06-25T16:21:30.034552756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 16:21:30.035753 containerd[1339]: time="2024-06-25T16:21:30.034562911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 16:21:30.035947 containerd[1339]: time="2024-06-25T16:21:30.034595913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 16:21:30.035947 containerd[1339]: time="2024-06-25T16:21:30.034624945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 16:21:30.035947 containerd[1339]: time="2024-06-25T16:21:30.034654072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 16:21:30.035947 containerd[1339]: time="2024-06-25T16:21:30.034707836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 16:21:30.035947 containerd[1339]: time="2024-06-25T16:21:30.034756563Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 16:21:30.035947 containerd[1339]: time="2024-06-25T16:21:30.034882623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 16:21:30.035947 containerd[1339]: time="2024-06-25T16:21:30.034927498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 16:21:30.035947 containerd[1339]: time="2024-06-25T16:21:30.034963534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jun 25 16:21:30.035947 containerd[1339]: time="2024-06-25T16:21:30.035001174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 16:21:30.035947 containerd[1339]: time="2024-06-25T16:21:30.035011998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 16:21:30.035947 containerd[1339]: time="2024-06-25T16:21:30.035052137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 16:21:30.035947 containerd[1339]: time="2024-06-25T16:21:30.035075217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 16:21:30.035947 containerd[1339]: time="2024-06-25T16:21:30.035085057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 25 16:21:30.036141 containerd[1339]: time="2024-06-25T16:21:30.035377615Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 16:21:30.036141 containerd[1339]: time="2024-06-25T16:21:30.035415959Z" level=info msg="Connect containerd service" Jun 
25 16:21:30.036141 containerd[1339]: time="2024-06-25T16:21:30.035437332Z" level=info msg="using legacy CRI server" Jun 25 16:21:30.036141 containerd[1339]: time="2024-06-25T16:21:30.035444578Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 16:21:30.036543 containerd[1339]: time="2024-06-25T16:21:30.036529929Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 16:21:30.037427 containerd[1339]: time="2024-06-25T16:21:30.037414881Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 16:21:30.038467 containerd[1339]: time="2024-06-25T16:21:30.038455587Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 16:21:30.038567 containerd[1339]: time="2024-06-25T16:21:30.038547276Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 16:21:30.039058 containerd[1339]: time="2024-06-25T16:21:30.039047748Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 16:21:30.039106 containerd[1339]: time="2024-06-25T16:21:30.039097277Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 16:21:30.039300 containerd[1339]: time="2024-06-25T16:21:30.039290471Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 16:21:30.039369 containerd[1339]: time="2024-06-25T16:21:30.038531488Z" level=info msg="Start subscribing containerd event" Jun 25 16:21:30.039395 containerd[1339]: time="2024-06-25T16:21:30.039376803Z" level=info msg="Start recovering state" Jun 25 16:21:30.039422 containerd[1339]: time="2024-06-25T16:21:30.039412596Z" level=info msg="Start event monitor" Jun 25 16:21:30.039442 containerd[1339]: time="2024-06-25T16:21:30.039424208Z" level=info msg="Start snapshots syncer" Jun 25 16:21:30.039442 containerd[1339]: time="2024-06-25T16:21:30.039430135Z" level=info msg="Start cni network conf syncer for default" Jun 25 16:21:30.039442 containerd[1339]: time="2024-06-25T16:21:30.039434007Z" level=info msg="Start streaming server" Jun 25 16:21:30.039503 containerd[1339]: time="2024-06-25T16:21:30.039359631Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 16:21:30.040108 containerd[1339]: time="2024-06-25T16:21:30.039526592Z" level=info msg="containerd successfully booted in 0.075834s" Jun 25 16:21:30.039587 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 16:21:30.163983 tar[1340]: linux-amd64/LICENSE Jun 25 16:21:30.164071 tar[1340]: linux-amd64/README.md Jun 25 16:21:30.171221 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 16:21:30.495129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:21:30.552511 sshd_keygen[1363]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 16:21:30.565757 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 16:21:30.569731 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 16:21:30.573827 systemd[1]: issuegen.service: Deactivated successfully. 
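In the containerd startup above, several snapshotter plugins are skipped with messages such as "path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem". The check amounts to asking which filesystem backs a given path. The sketch below does this by scanning /proc/self/mounts for the longest matching mount point; this is an illustration under that assumption, not containerd's actual implementation (which uses statfs magic numbers in Go).

```python
# Report the filesystem type backing a path by scanning /proc/self/mounts.
import os

def fs_type(path):
    path = os.path.realpath(path)
    best, best_type = "", "unknown"
    with open("/proc/self/mounts") as mounts:
        for line in mounts:
            _dev, mountpoint, fstype, *_rest = line.split()
            # Mount points may contain octal escapes like \040 (space); decode them.
            mountpoint = mountpoint.encode().decode("unicode_escape")
            if path == mountpoint or path.startswith(mountpoint.rstrip("/") + "/"):
                if len(mountpoint) > len(best):
                    best, best_type = mountpoint, fstype
    return best_type

if __name__ == "__main__":
    # On the machine in this log this prints "ext4", which is why the btrfs
    # and zfs snapshotters were skipped.
    print(fs_type("/var/lib/containerd"))
```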
Jun 25 16:21:30.573947 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 16:21:30.575339 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 16:21:30.581688 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 16:21:30.585782 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 16:21:30.586953 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 16:21:30.587163 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 16:21:30.587293 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 16:21:30.588518 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 16:21:30.593914 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jun 25 16:21:30.594025 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 16:21:30.594212 systemd[1]: Startup finished in 996ms (kernel) + 4.666s (initrd) + 3.716s (userspace) = 9.379s. Jun 25 16:21:30.615122 login[1462]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 16:21:30.617601 login[1463]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 16:21:30.621125 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 16:21:30.626794 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 16:21:30.629087 systemd-logind[1327]: New session 1 of user core. Jun 25 16:21:30.631529 systemd-logind[1327]: New session 2 of user core. Jun 25 16:21:30.634645 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 16:21:30.638750 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 16:21:30.677474 (systemd)[1472]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:21:30.748211 systemd[1472]: Queued start job for default target default.target. Jun 25 16:21:30.754851 systemd[1472]: Reached target paths.target - Paths. Jun 25 16:21:30.754966 systemd[1472]: Reached target sockets.target - Sockets. Jun 25 16:21:30.755046 systemd[1472]: Reached target timers.target - Timers. Jun 25 16:21:30.755098 systemd[1472]: Reached target basic.target - Basic System. Jun 25 16:21:30.755168 systemd[1472]: Reached target default.target - Main User Target. Jun 25 16:21:30.755231 systemd[1472]: Startup finished in 73ms. Jun 25 16:21:30.755347 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 16:21:30.756408 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 16:21:30.757033 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 16:21:31.718902 kubelet[1450]: E0625 16:21:31.718870 1450 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:21:31.720469 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:21:31.720691 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:21:41.899116 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
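Every kubelet failure in this log is the same error: /var/lib/kubelet/config.yaml does not exist yet, so the unit exits and systemd schedules another restart roughly ten seconds later. On a kubeadm-provisioned node that file is normally written by kubeadm init or kubeadm join. A trivial stand-in for the failing startup check, purely for illustration (the real kubelet does this in Go and then validates the KubeletConfiguration):

```python
# Illustrative stand-in for the kubelet's config-file load that keeps failing above.
import sys
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # typically written by kubeadm

def load_kubelet_config():
    try:
        return KUBELET_CONFIG.read_text()
    except FileNotFoundError as err:
        # Mirrors the failure mode in the log: exit non-zero so the service
        # manager records "status=1/FAILURE" and schedules a restart.
        sys.exit(f"failed to load Kubelet config file {KUBELET_CONFIG}: {err}")

if __name__ == "__main__":
    print(load_kubelet_config()[:200])
```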
Jun 25 16:21:41.899236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:21:41.908649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:21:41.963302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:21:41.987990 kubelet[1496]: E0625 16:21:41.987956 1496 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:21:41.990264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:21:41.990343 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:21:52.149147 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 16:21:52.149295 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:21:52.155639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:21:52.391190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:21:52.421268 kubelet[1506]: E0625 16:21:52.421211 1506 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:21:52.422471 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:21:52.422561 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:22:02.649227 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 16:22:02.649392 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:22:02.656721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:22:02.984922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:22:03.011917 kubelet[1516]: E0625 16:22:03.011883 1516 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:22:03.013080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:22:03.013157 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:22:09.916814 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 16:22:09.917787 systemd[1]: Started sshd@0-139.178.70.109:22-139.178.68.195:46782.service - OpenSSH per-connection server daemon (139.178.68.195:46782). Jun 25 16:22:09.952797 sshd[1523]: Accepted publickey for core from 139.178.68.195 port 46782 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:22:09.953799 sshd[1523]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:09.957236 systemd-logind[1327]: New session 3 of user core. Jun 25 16:22:09.964663 systemd[1]: Started session-3.scope - Session 3 of User core. 
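The "Accepted publickey" lines above identify the client key by an OpenSSH SHA256 fingerprint. That fingerprint is the SHA-256 digest of the raw public-key blob, base64-encoded with the trailing padding removed; a small sketch follows (the host-key path in the usage example is only a typical default, not taken from this log):

```python
# Compute an OpenSSH-style "SHA256:..." fingerprint from a public-key line.
import base64
import hashlib

def ssh_fingerprint(openssh_pubkey_line):
    # A line such as "ssh-rsa AAAA... comment": field 1 is the base64 key blob.
    blob_b64 = openssh_pubkey_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    with open("/etc/ssh/ssh_host_rsa_key.pub") as f:  # path is an assumed default
        print(ssh_fingerprint(f.read()))
```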
Jun 25 16:22:10.020174 systemd[1]: Started sshd@1-139.178.70.109:22-139.178.68.195:46792.service - OpenSSH per-connection server daemon (139.178.68.195:46792). Jun 25 16:22:10.050459 sshd[1528]: Accepted publickey for core from 139.178.68.195 port 46792 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:22:10.051274 sshd[1528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:10.055009 systemd-logind[1327]: New session 4 of user core. Jun 25 16:22:10.065705 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 16:22:10.116512 sshd[1528]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:10.126940 systemd[1]: sshd@1-139.178.70.109:22-139.178.68.195:46792.service: Deactivated successfully. Jun 25 16:22:10.127338 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 16:22:10.127748 systemd-logind[1327]: Session 4 logged out. Waiting for processes to exit. Jun 25 16:22:10.128591 systemd[1]: Started sshd@2-139.178.70.109:22-139.178.68.195:46800.service - OpenSSH per-connection server daemon (139.178.68.195:46800). Jun 25 16:22:10.129246 systemd-logind[1327]: Removed session 4. Jun 25 16:22:10.157000 sshd[1534]: Accepted publickey for core from 139.178.68.195 port 46800 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:22:10.157780 sshd[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:10.160882 systemd-logind[1327]: New session 5 of user core. Jun 25 16:22:10.165580 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 16:22:10.214810 sshd[1534]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:10.225393 systemd[1]: sshd@2-139.178.70.109:22-139.178.68.195:46800.service: Deactivated successfully. Jun 25 16:22:10.225872 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 16:22:10.226393 systemd-logind[1327]: Session 5 logged out. Waiting for processes to exit. Jun 25 16:22:10.231944 systemd[1]: Started sshd@3-139.178.70.109:22-139.178.68.195:46812.service - OpenSSH per-connection server daemon (139.178.68.195:46812). Jun 25 16:22:10.232510 systemd-logind[1327]: Removed session 5. Jun 25 16:22:10.261406 sshd[1540]: Accepted publickey for core from 139.178.68.195 port 46812 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:22:10.262266 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:10.264901 systemd-logind[1327]: New session 6 of user core. Jun 25 16:22:10.277613 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 16:22:10.327876 sshd[1540]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:10.334862 systemd[1]: sshd@3-139.178.70.109:22-139.178.68.195:46812.service: Deactivated successfully. Jun 25 16:22:10.335193 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 16:22:10.335547 systemd-logind[1327]: Session 6 logged out. Waiting for processes to exit. Jun 25 16:22:10.336237 systemd[1]: Started sshd@4-139.178.70.109:22-139.178.68.195:46816.service - OpenSSH per-connection server daemon (139.178.68.195:46816). Jun 25 16:22:10.336786 systemd-logind[1327]: Removed session 6. 
Jun 25 16:22:10.364586 sshd[1546]: Accepted publickey for core from 139.178.68.195 port 46816 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:22:10.365356 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:10.367943 systemd-logind[1327]: New session 7 of user core. Jun 25 16:22:10.374594 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 16:22:10.430448 sudo[1549]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 16:22:10.430615 sudo[1549]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:22:10.442676 sudo[1549]: pam_unix(sudo:session): session closed for user root Jun 25 16:22:10.443487 sshd[1546]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:10.450867 systemd[1]: sshd@4-139.178.70.109:22-139.178.68.195:46816.service: Deactivated successfully. Jun 25 16:22:10.451235 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 16:22:10.451585 systemd-logind[1327]: Session 7 logged out. Waiting for processes to exit. Jun 25 16:22:10.452347 systemd[1]: Started sshd@5-139.178.70.109:22-139.178.68.195:46828.service - OpenSSH per-connection server daemon (139.178.68.195:46828). Jun 25 16:22:10.452895 systemd-logind[1327]: Removed session 7. Jun 25 16:22:10.480853 sshd[1553]: Accepted publickey for core from 139.178.68.195 port 46828 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:22:10.482149 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:10.484629 systemd-logind[1327]: New session 8 of user core. Jun 25 16:22:10.490580 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 16:22:10.539511 sudo[1557]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 16:22:10.539845 sudo[1557]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:22:10.541687 sudo[1557]: pam_unix(sudo:session): session closed for user root Jun 25 16:22:10.544456 sudo[1556]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 16:22:10.544609 sudo[1556]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:22:10.560729 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jun 25 16:22:10.560000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:22:10.561507 kernel: kauditd_printk_skb: 168 callbacks suppressed Jun 25 16:22:10.561534 kernel: audit: type=1305 audit(1719332530.560:213): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:22:10.560000 audit[1560]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffed9a5ad30 a2=420 a3=0 items=0 ppid=1 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:10.563286 auditctl[1560]: No rules Jun 25 16:22:10.565719 kernel: audit: type=1300 audit(1719332530.560:213): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffed9a5ad30 a2=420 a3=0 items=0 ppid=1 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:10.565752 kernel: audit: type=1327 audit(1719332530.560:213): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:22:10.560000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:22:10.563519 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 16:22:10.563623 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 16:22:10.565856 kernel: audit: type=1131 audit(1719332530.562:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:10.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:10.565311 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:22:10.580063 augenrules[1577]: No rules Jun 25 16:22:10.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:10.580588 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:22:10.582372 sudo[1556]: pam_unix(sudo:session): session closed for user root Jun 25 16:22:10.582527 kernel: audit: type=1130 audit(1719332530.580:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:10.581000 audit[1556]: USER_END pid=1556 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:22:10.581000 audit[1556]: CRED_DISP pid=1556 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:22:10.586347 kernel: audit: type=1106 audit(1719332530.581:216): pid=1556 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:22:10.586377 kernel: audit: type=1104 audit(1719332530.581:217): pid=1556 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:22:10.585971 sshd[1553]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:10.587246 kernel: audit: type=1106 audit(1719332530.586:218): pid=1553 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:22:10.586000 audit[1553]: USER_END pid=1553 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:22:10.591552 kernel: audit: type=1104 audit(1719332530.586:219): pid=1553 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:22:10.591594 kernel: audit: type=1131 audit(1719332530.588:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.70.109:22-139.178.68.195:46828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:10.586000 audit[1553]: CRED_DISP pid=1553 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:22:10.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.70.109:22-139.178.68.195:46828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:10.588815 systemd[1]: sshd@5-139.178.70.109:22-139.178.68.195:46828.service: Deactivated successfully. Jun 25 16:22:10.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.109:22-139.178.68.195:46840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:10.589138 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 16:22:10.590654 systemd[1]: Started sshd@6-139.178.70.109:22-139.178.68.195:46840.service - OpenSSH per-connection server daemon (139.178.68.195:46840). Jun 25 16:22:10.592518 systemd-logind[1327]: Session 8 logged out. Waiting for processes to exit. Jun 25 16:22:10.593959 systemd-logind[1327]: Removed session 8. 
Jun 25 16:22:10.619000 audit[1583]: USER_ACCT pid=1583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:22:10.619857 sshd[1583]: Accepted publickey for core from 139.178.68.195 port 46840 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:22:10.620000 audit[1583]: CRED_ACQ pid=1583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:22:10.620000 audit[1583]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef544fb80 a2=3 a3=7f21e847f480 items=0 ppid=1 pid=1583 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:10.620000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:22:10.620821 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:10.623340 systemd-logind[1327]: New session 9 of user core. Jun 25 16:22:10.633603 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 16:22:10.636000 audit[1583]: USER_START pid=1583 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:22:10.636000 audit[1585]: CRED_ACQ pid=1585 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:22:10.683000 audit[1586]: USER_ACCT pid=1586 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:22:10.684063 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 16:22:10.683000 audit[1586]: CRED_REFR pid=1586 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:22:10.684246 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:22:10.684000 audit[1586]: USER_START pid=1586 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:22:10.797788 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 16:22:11.090008 dockerd[1595]: time="2024-06-25T16:22:11.089929992Z" level=info msg="Starting up" Jun 25 16:22:11.100127 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport703942199-merged.mount: Deactivated successfully. Jun 25 16:22:11.143387 systemd[1]: var-lib-docker-metacopy\x2dcheck2975697764-merged.mount: Deactivated successfully. 
Jun 25 16:22:11.149914 dockerd[1595]: time="2024-06-25T16:22:11.149896712Z" level=info msg="Loading containers: start." Jun 25 16:22:11.185000 audit[1628]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1628 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.185000 audit[1628]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffec9d28b80 a2=0 a3=7f2a667fee90 items=0 ppid=1595 pid=1628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.185000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 16:22:11.187000 audit[1630]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1630 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.187000 audit[1630]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffee91c76e0 a2=0 a3=7f8615204e90 items=0 ppid=1595 pid=1630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.187000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 16:22:11.188000 audit[1632]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1632 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.188000 audit[1632]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff33bba430 a2=0 a3=7fd7f8784e90 items=0 ppid=1595 pid=1632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.188000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:22:11.189000 audit[1634]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1634 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.189000 audit[1634]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc76b84cb0 a2=0 a3=7f039a6e4e90 items=0 ppid=1595 pid=1634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.189000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:22:11.191000 audit[1636]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1636 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.191000 audit[1636]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe6d2b6030 a2=0 a3=7fce4380ee90 items=0 ppid=1595 pid=1636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.191000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 16:22:11.192000 audit[1638]: 
NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1638 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.192000 audit[1638]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe6a8bd300 a2=0 a3=7f17e0595e90 items=0 ppid=1595 pid=1638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.192000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 16:22:11.198000 audit[1640]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1640 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.198000 audit[1640]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcf36304f0 a2=0 a3=7fcd37bd2e90 items=0 ppid=1595 pid=1640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.198000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 16:22:11.199000 audit[1642]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1642 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.199000 audit[1642]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc4348c430 a2=0 a3=7fc900abae90 items=0 ppid=1595 pid=1642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.199000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 16:22:11.201000 audit[1644]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1644 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.201000 audit[1644]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffe291dcba0 a2=0 a3=7ffadb0c7e90 items=0 ppid=1595 pid=1644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.201000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:22:11.204000 audit[1648]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1648 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.204000 audit[1648]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe1ff4ad90 a2=0 a3=7f7862a80e90 items=0 ppid=1595 pid=1648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.204000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:22:11.205000 audit[1649]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1649 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.205000 audit[1649]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc4ffd9cf0 a2=0 a3=7f65a1547e90 items=0 ppid=1595 pid=1649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.205000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:22:11.211498 kernel: Initializing XFRM netlink socket Jun 25 16:22:11.234000 audit[1657]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1657 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.234000 audit[1657]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffe859e02a0 a2=0 a3=7ffaa3651e90 items=0 ppid=1595 pid=1657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.234000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 16:22:11.248000 audit[1660]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1660 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.248000 audit[1660]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff14f9f890 a2=0 a3=7f689f588e90 items=0 ppid=1595 pid=1660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.248000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 16:22:11.250000 audit[1664]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1664 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.250000 audit[1664]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffcdb0d7610 a2=0 a3=7f9b9ef80e90 items=0 ppid=1595 pid=1664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.250000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 16:22:11.251000 audit[1666]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1666 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.251000 audit[1666]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc956a7630 a2=0 a3=7fea7279de90 items=0 ppid=1595 pid=1666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.251000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 16:22:11.253000 audit[1668]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1668 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 
25 16:22:11.253000 audit[1668]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffd7fa4dd50 a2=0 a3=7f07b1335e90 items=0 ppid=1595 pid=1668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.253000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 16:22:11.254000 audit[1670]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1670 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.254000 audit[1670]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffd57be1cb0 a2=0 a3=7ff5e3a24e90 items=0 ppid=1595 pid=1670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.254000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 16:22:11.255000 audit[1672]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1672 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.255000 audit[1672]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fff517b86b0 a2=0 a3=7fee950a6e90 items=0 ppid=1595 pid=1672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.255000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 16:22:11.260000 audit[1675]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1675 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.260000 audit[1675]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffd5a5bff60 a2=0 a3=7f228c796e90 items=0 ppid=1595 pid=1675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.260000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 16:22:11.261000 audit[1677]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1677 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.261000 audit[1677]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffc9fc02040 a2=0 a3=7f18e493ae90 items=0 ppid=1595 pid=1677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.261000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:22:11.262000 audit[1679]: NETFILTER_CFG 
table=filter:22 family=2 entries=1 op=nft_register_rule pid=1679 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.262000 audit[1679]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd67164010 a2=0 a3=7f2e694c5e90 items=0 ppid=1595 pid=1679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.262000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:22:11.263000 audit[1681]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1681 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.263000 audit[1681]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe44d583b0 a2=0 a3=7f74ec505e90 items=0 ppid=1595 pid=1681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.263000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 16:22:11.264589 systemd-networkd[1152]: docker0: Link UP Jun 25 16:22:11.268000 audit[1685]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1685 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.268000 audit[1685]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc1ca528c0 a2=0 a3=7f4f97ce2e90 items=0 ppid=1595 pid=1685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.268000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:22:11.268000 audit[1686]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1686 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:11.268000 audit[1686]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd56b31890 a2=0 a3=7f810d575e90 items=0 ppid=1595 pid=1686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:11.268000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:22:11.269708 dockerd[1595]: time="2024-06-25T16:22:11.269693490Z" level=info msg="Loading containers: done." 
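The audit records above carry the executed command line in the PROCTITLE field as hex, with NUL bytes separating the arguments. For example, 2F7362696E2F617564697463746C002D44 from the audit-rules reload earlier decodes to "/sbin/auditctl -D", and the longer strings in this block are the iptables invocations with which dockerd creates the DOCKER, DOCKER-USER and DOCKER-ISOLATION-STAGE chains. A small decoder:

```python
# Decode an audit PROCTITLE hex string into the original argv (NUL-separated).
def decode_proctitle(hex_string):
    return " ".join(
        part.decode() for part in bytes.fromhex(hex_string).split(b"\x00") if part
    )

if __name__ == "__main__":
    print(decode_proctitle("2F7362696E2F617564697463746C002D44"))
    # -> /sbin/auditctl -D
    print(decode_proctitle(
        "2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
    ))
    # -> /usr/sbin/iptables --wait -t nat -N DOCKER
```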
Jun 25 16:22:11.321167 dockerd[1595]: time="2024-06-25T16:22:11.321127591Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 16:22:11.321373 dockerd[1595]: time="2024-06-25T16:22:11.321362800Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 16:22:11.321467 dockerd[1595]: time="2024-06-25T16:22:11.321459150Z" level=info msg="Daemon has completed initialization" Jun 25 16:22:11.332359 dockerd[1595]: time="2024-06-25T16:22:11.332333051Z" level=info msg="API listen on /run/docker.sock" Jun 25 16:22:11.332416 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 16:22:11.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:11.929247 containerd[1339]: time="2024-06-25T16:22:11.929197418Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jun 25 16:22:12.551028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1010776461.mount: Deactivated successfully. Jun 25 16:22:13.149109 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 25 16:22:13.149231 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:22:13.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:13.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:13.165761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:22:13.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:13.225336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:22:13.262842 kubelet[1784]: E0625 16:22:13.262820 1784 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:22:13.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:22:13.264008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:22:13.264087 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 25 16:22:14.036718 containerd[1339]: time="2024-06-25T16:22:14.036690417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:14.037110 containerd[1339]: time="2024-06-25T16:22:14.037084875Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=32771801" Jun 25 16:22:14.037625 containerd[1339]: time="2024-06-25T16:22:14.037610725Z" level=info msg="ImageCreate event name:\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:14.038793 containerd[1339]: time="2024-06-25T16:22:14.038778415Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:14.039955 containerd[1339]: time="2024-06-25T16:22:14.039942961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:14.041043 containerd[1339]: time="2024-06-25T16:22:14.041027613Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"32768601\" in 2.111779685s" Jun 25 16:22:14.041110 containerd[1339]: time="2024-06-25T16:22:14.041098834Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\"" Jun 25 16:22:14.053677 containerd[1339]: time="2024-06-25T16:22:14.053657640Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jun 25 16:22:15.218537 update_engine[1329]: I0625 16:22:15.218503 1329 update_attempter.cc:509] Updating boot flags... 
Jun 25 16:22:15.639496 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1811) Jun 25 16:22:15.727235 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1814) Jun 25 16:22:15.762748 containerd[1339]: time="2024-06-25T16:22:15.762720185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:15.764070 containerd[1339]: time="2024-06-25T16:22:15.764038614Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=29588674" Jun 25 16:22:15.764465 containerd[1339]: time="2024-06-25T16:22:15.764441818Z" level=info msg="ImageCreate event name:\"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:15.765461 containerd[1339]: time="2024-06-25T16:22:15.765439394Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:15.766591 containerd[1339]: time="2024-06-25T16:22:15.766567992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:15.767221 containerd[1339]: time="2024-06-25T16:22:15.767204898Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"31138657\" in 1.713428137s" Jun 25 16:22:15.767275 containerd[1339]: time="2024-06-25T16:22:15.767264352Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\"" Jun 25 16:22:15.780602 containerd[1339]: time="2024-06-25T16:22:15.780582566Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jun 25 16:22:17.067458 containerd[1339]: time="2024-06-25T16:22:17.067431266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:17.067875 containerd[1339]: time="2024-06-25T16:22:17.067849184Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=17778120" Jun 25 16:22:17.068339 containerd[1339]: time="2024-06-25T16:22:17.068326004Z" level=info msg="ImageCreate event name:\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:17.070212 containerd[1339]: time="2024-06-25T16:22:17.070196694Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:17.071462 containerd[1339]: time="2024-06-25T16:22:17.070860741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:17.071514 
containerd[1339]: time="2024-06-25T16:22:17.071444033Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"19328121\" in 1.290754323s" Jun 25 16:22:17.071543 containerd[1339]: time="2024-06-25T16:22:17.071516273Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\"" Jun 25 16:22:17.089950 containerd[1339]: time="2024-06-25T16:22:17.089910323Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jun 25 16:22:18.124772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1418748891.mount: Deactivated successfully. Jun 25 16:22:18.430639 containerd[1339]: time="2024-06-25T16:22:18.430610052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:18.436584 containerd[1339]: time="2024-06-25T16:22:18.436558404Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=29035438" Jun 25 16:22:18.438945 containerd[1339]: time="2024-06-25T16:22:18.438926441Z" level=info msg="ImageCreate event name:\"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:18.444377 containerd[1339]: time="2024-06-25T16:22:18.444360704Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:18.449827 containerd[1339]: time="2024-06-25T16:22:18.449809737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:18.450624 containerd[1339]: time="2024-06-25T16:22:18.450602159Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"29034457\" in 1.360651596s" Jun 25 16:22:18.450655 containerd[1339]: time="2024-06-25T16:22:18.450629011Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\"" Jun 25 16:22:18.464089 containerd[1339]: time="2024-06-25T16:22:18.464062297Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jun 25 16:22:18.982774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1078587262.mount: Deactivated successfully. 
Jun 25 16:22:19.806053 containerd[1339]: time="2024-06-25T16:22:19.806017474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:19.811547 containerd[1339]: time="2024-06-25T16:22:19.811510008Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jun 25 16:22:19.820791 containerd[1339]: time="2024-06-25T16:22:19.820759364Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:19.835140 containerd[1339]: time="2024-06-25T16:22:19.835116655Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:19.840253 containerd[1339]: time="2024-06-25T16:22:19.840223631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:19.842110 containerd[1339]: time="2024-06-25T16:22:19.842080422Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.377989402s" Jun 25 16:22:19.842155 containerd[1339]: time="2024-06-25T16:22:19.842111647Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jun 25 16:22:19.859800 containerd[1339]: time="2024-06-25T16:22:19.859776564Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 16:22:20.318952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4153429119.mount: Deactivated successfully. 
Jun 25 16:22:20.321293 containerd[1339]: time="2024-06-25T16:22:20.321276240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:20.321644 containerd[1339]: time="2024-06-25T16:22:20.321619713Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jun 25 16:22:20.321871 containerd[1339]: time="2024-06-25T16:22:20.321858803Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:20.322801 containerd[1339]: time="2024-06-25T16:22:20.322790881Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:20.323574 containerd[1339]: time="2024-06-25T16:22:20.323562189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:20.324024 containerd[1339]: time="2024-06-25T16:22:20.324007488Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 464.095ms" Jun 25 16:22:20.324057 containerd[1339]: time="2024-06-25T16:22:20.324025798Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 16:22:20.336928 containerd[1339]: time="2024-06-25T16:22:20.336898878Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jun 25 16:22:20.794945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2308172364.mount: Deactivated successfully. 
Jun 25 16:22:22.747689 containerd[1339]: time="2024-06-25T16:22:22.747654363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:22.753683 containerd[1339]: time="2024-06-25T16:22:22.753654985Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jun 25 16:22:22.761336 containerd[1339]: time="2024-06-25T16:22:22.761317304Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:22.766569 containerd[1339]: time="2024-06-25T16:22:22.766540496Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:22.785841 containerd[1339]: time="2024-06-25T16:22:22.785820405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:22.786900 containerd[1339]: time="2024-06-25T16:22:22.786872977Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.449843243s" Jun 25 16:22:22.786988 containerd[1339]: time="2024-06-25T16:22:22.786974451Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jun 25 16:22:23.399122 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jun 25 16:22:23.399270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:22:23.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:23.402627 kernel: kauditd_printk_skb: 88 callbacks suppressed Jun 25 16:22:23.402659 kernel: audit: type=1130 audit(1719332543.398:259): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:23.402675 kernel: audit: type=1131 audit(1719332543.398:260): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:23.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:23.405710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:22:23.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:23.760576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 16:22:23.762684 kernel: audit: type=1130 audit(1719332543.760:261): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:23.819336 kubelet[2007]: E0625 16:22:23.819313 2007 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:22:23.822525 kernel: audit: type=1131 audit(1719332543.820:262): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:22:23.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:22:23.820514 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:22:23.820595 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:22:25.209912 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:22:25.216744 kernel: audit: type=1130 audit(1719332545.209:263): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:25.216768 kernel: audit: type=1131 audit(1719332545.209:264): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:25.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:25.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:25.216710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:22:25.231103 systemd[1]: Reloading. Jun 25 16:22:25.341238 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 25 16:22:25.352810 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jun 25 16:22:25.392000 audit: BPF prog-id=44 op=LOAD Jun 25 16:22:25.392000 audit: BPF prog-id=30 op=UNLOAD Jun 25 16:22:25.394501 kernel: audit: type=1334 audit(1719332545.392:265): prog-id=44 op=LOAD Jun 25 16:22:25.394532 kernel: audit: type=1334 audit(1719332545.392:266): prog-id=30 op=UNLOAD Jun 25 16:22:25.394000 audit: BPF prog-id=45 op=LOAD Jun 25 16:22:25.394000 audit: BPF prog-id=41 op=UNLOAD Jun 25 16:22:25.395619 kernel: audit: type=1334 audit(1719332545.394:267): prog-id=45 op=LOAD Jun 25 16:22:25.395653 kernel: audit: type=1334 audit(1719332545.394:268): prog-id=41 op=UNLOAD Jun 25 16:22:25.394000 audit: BPF prog-id=46 op=LOAD Jun 25 16:22:25.394000 audit: BPF prog-id=47 op=LOAD Jun 25 16:22:25.394000 audit: BPF prog-id=42 op=UNLOAD Jun 25 16:22:25.394000 audit: BPF prog-id=43 op=UNLOAD Jun 25 16:22:25.394000 audit: BPF prog-id=48 op=LOAD Jun 25 16:22:25.394000 audit: BPF prog-id=31 op=UNLOAD Jun 25 16:22:25.394000 audit: BPF prog-id=49 op=LOAD Jun 25 16:22:25.394000 audit: BPF prog-id=50 op=LOAD Jun 25 16:22:25.394000 audit: BPF prog-id=32 op=UNLOAD Jun 25 16:22:25.394000 audit: BPF prog-id=33 op=UNLOAD Jun 25 16:22:25.394000 audit: BPF prog-id=51 op=LOAD Jun 25 16:22:25.394000 audit: BPF prog-id=52 op=LOAD Jun 25 16:22:25.394000 audit: BPF prog-id=34 op=UNLOAD Jun 25 16:22:25.394000 audit: BPF prog-id=35 op=UNLOAD Jun 25 16:22:25.395000 audit: BPF prog-id=53 op=LOAD Jun 25 16:22:25.395000 audit: BPF prog-id=40 op=UNLOAD Jun 25 16:22:25.397000 audit: BPF prog-id=54 op=LOAD Jun 25 16:22:25.397000 audit: BPF prog-id=36 op=UNLOAD Jun 25 16:22:25.397000 audit: BPF prog-id=55 op=LOAD Jun 25 16:22:25.397000 audit: BPF prog-id=56 op=LOAD Jun 25 16:22:25.397000 audit: BPF prog-id=37 op=UNLOAD Jun 25 16:22:25.397000 audit: BPF prog-id=38 op=UNLOAD Jun 25 16:22:25.398000 audit: BPF prog-id=57 op=LOAD Jun 25 16:22:25.398000 audit: BPF prog-id=39 op=UNLOAD Jun 25 16:22:25.406942 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 16:22:25.407059 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 16:22:25.407291 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:22:25.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:22:25.412988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:22:25.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:25.735814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:22:25.769790 kubelet[2098]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:22:25.769790 kubelet[2098]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:22:25.769790 kubelet[2098]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:22:25.784982 kubelet[2098]: I0625 16:22:25.784948 2098 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:22:26.058706 kubelet[2098]: I0625 16:22:26.058651 2098 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 16:22:26.058785 kubelet[2098]: I0625 16:22:26.058778 2098 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:22:26.058970 kubelet[2098]: I0625 16:22:26.058941 2098 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 16:22:26.072904 kubelet[2098]: I0625 16:22:26.072896 2098 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:22:26.075062 kubelet[2098]: E0625 16:22:26.075050 2098 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.109:6443: connect: connection refused Jun 25 16:22:26.085798 kubelet[2098]: I0625 16:22:26.085788 2098 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 16:22:26.089177 kubelet[2098]: I0625 16:22:26.089080 2098 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:22:26.089332 kubelet[2098]: I0625 16:22:26.089224 2098 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:22:26.089757 kubelet[2098]: I0625 16:22:26.089748 2098 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:22:26.089803 kubelet[2098]: I0625 16:22:26.089798 2098 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:22:26.089898 
kubelet[2098]: I0625 16:22:26.089891 2098 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:22:26.090615 kubelet[2098]: I0625 16:22:26.090607 2098 kubelet.go:400] "Attempting to sync node with API server" Jun 25 16:22:26.090697 kubelet[2098]: I0625 16:22:26.090690 2098 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:22:26.090743 kubelet[2098]: I0625 16:22:26.090738 2098 kubelet.go:312] "Adding apiserver pod source" Jun 25 16:22:26.090788 kubelet[2098]: I0625 16:22:26.090782 2098 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:22:26.092362 kubelet[2098]: W0625 16:22:26.092334 2098 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jun 25 16:22:26.092392 kubelet[2098]: E0625 16:22:26.092366 2098 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jun 25 16:22:26.093007 kubelet[2098]: W0625 16:22:26.092968 2098 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.109:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jun 25 16:22:26.093064 kubelet[2098]: E0625 16:22:26.093057 2098 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.109:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jun 25 16:22:26.093141 kubelet[2098]: I0625 16:22:26.093133 2098 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:22:26.094275 kubelet[2098]: I0625 16:22:26.094267 2098 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 16:22:26.094339 kubelet[2098]: W0625 16:22:26.094333 2098 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jun 25 16:22:26.094773 kubelet[2098]: I0625 16:22:26.094766 2098 server.go:1264] "Started kubelet" Jun 25 16:22:26.099354 kubelet[2098]: I0625 16:22:26.099330 2098 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:22:26.100095 kubelet[2098]: I0625 16:22:26.100066 2098 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 16:22:26.100295 kubelet[2098]: I0625 16:22:26.100288 2098 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:22:26.100455 kubelet[2098]: E0625 16:22:26.100396 2098 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.109:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.109:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17dc4bd75cbeef1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-06-25 16:22:26.09475561 +0000 UTC m=+0.354828656,LastTimestamp:2024-06-25 16:22:26.09475561 +0000 UTC m=+0.354828656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 25 16:22:26.100960 kubelet[2098]: I0625 16:22:26.100903 2098 server.go:455] "Adding debug handlers to kubelet server" Jun 25 16:22:26.102024 kubelet[2098]: I0625 16:22:26.102015 2098 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:22:26.104000 audit[2109]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:26.104000 audit[2109]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffca6ffb510 a2=0 a3=7fa4c9a79e90 items=0 ppid=2098 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:26.104000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:22:26.105000 audit[2110]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:26.105000 audit[2110]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6c3c5680 a2=0 a3=7fe4d7ed4e90 items=0 ppid=2098 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:26.105000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:22:26.106675 kubelet[2098]: E0625 16:22:26.106663 2098 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:22:26.106740 kubelet[2098]: E0625 16:22:26.106729 2098 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:22:26.106768 kubelet[2098]: I0625 16:22:26.106750 2098 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:22:26.106816 kubelet[2098]: I0625 16:22:26.106806 2098 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 16:22:26.106846 kubelet[2098]: I0625 16:22:26.106830 2098 reconciler.go:26] "Reconciler: start to sync state" Jun 25 16:22:26.107028 kubelet[2098]: W0625 16:22:26.107005 2098 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jun 25 16:22:26.107054 kubelet[2098]: E0625 16:22:26.107033 2098 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jun 25 16:22:26.107323 kubelet[2098]: I0625 16:22:26.107312 2098 factory.go:221] Registration of the systemd container factory successfully Jun 25 16:22:26.107364 kubelet[2098]: I0625 16:22:26.107354 2098 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 16:22:26.107545 kubelet[2098]: E0625 16:22:26.107524 2098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="200ms" Jun 25 16:22:26.108000 audit[2112]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:26.108000 audit[2112]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe7fe4e830 a2=0 a3=7f80609f1e90 items=0 ppid=2098 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:26.108000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:22:26.108948 kubelet[2098]: I0625 16:22:26.108936 2098 factory.go:221] Registration of the containerd container factory successfully Jun 25 16:22:26.110000 audit[2114]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:26.110000 audit[2114]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdbb1bbe80 a2=0 a3=7fca8f45ae90 items=0 ppid=2098 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:26.110000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 
25 16:22:26.116588 kubelet[2098]: I0625 16:22:26.116570 2098 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:22:26.115000 audit[2117]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:26.115000 audit[2117]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffdc87a6210 a2=0 a3=7f03bd096e90 items=0 ppid=2098 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:26.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 16:22:26.116000 audit[2118]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:26.116000 audit[2118]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd7bb50090 a2=0 a3=7fe620db0e90 items=0 ppid=2098 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:26.116000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:22:26.117429 kubelet[2098]: I0625 16:22:26.117421 2098 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 16:22:26.117496 kubelet[2098]: I0625 16:22:26.117477 2098 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:22:26.117540 kubelet[2098]: I0625 16:22:26.117535 2098 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 16:22:26.117600 kubelet[2098]: E0625 16:22:26.117591 2098 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:22:26.118000 audit[2120]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2120 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:26.118000 audit[2120]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd10aa84a0 a2=0 a3=7f33d436ae90 items=0 ppid=2098 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:26.118000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:22:26.119000 audit[2121]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=2121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:26.119000 audit[2121]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde4115a30 a2=0 a3=7fdb2ce24e90 items=0 ppid=2098 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:26.119000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:22:26.120000 audit[2122]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=2122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:26.120000 audit[2122]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa7a792b0 a2=0 a3=7fe748d78e90 items=0 ppid=2098 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:26.120000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:22:26.121000 audit[2123]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=2123 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:26.121000 audit[2123]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc897d78f0 a2=0 a3=7fcb61a7fe90 items=0 ppid=2098 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:26.121000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:22:26.121000 audit[2124]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=2124 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:26.121000 audit[2124]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffdfce9d150 a2=0 a3=7f8c7982de90 items=0 ppid=2098 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:26.121000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:22:26.122000 audit[2125]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2125 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:26.122000 audit[2125]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc968b69b0 a2=0 a3=7fc7f41ffe90 items=0 ppid=2098 pid=2125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:26.122000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:22:26.125333 kubelet[2098]: W0625 16:22:26.123372 2098 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jun 25 16:22:26.125333 kubelet[2098]: E0625 16:22:26.123396 2098 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jun 25 16:22:26.131153 
kubelet[2098]: I0625 16:22:26.131142 2098 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:22:26.131153 kubelet[2098]: I0625 16:22:26.131150 2098 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:22:26.131225 kubelet[2098]: I0625 16:22:26.131159 2098 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:22:26.140893 kubelet[2098]: I0625 16:22:26.140877 2098 policy_none.go:49] "None policy: Start" Jun 25 16:22:26.141279 kubelet[2098]: I0625 16:22:26.141254 2098 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 16:22:26.141329 kubelet[2098]: I0625 16:22:26.141283 2098 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:22:26.195876 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 16:22:26.207892 kubelet[2098]: I0625 16:22:26.207841 2098 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 16:22:26.208742 kubelet[2098]: E0625 16:22:26.208409 2098 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Jun 25 16:22:26.209526 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 16:22:26.212309 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 25 16:22:26.222872 kubelet[2098]: E0625 16:22:26.222856 2098 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:22:26.227236 kubelet[2098]: I0625 16:22:26.227223 2098 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:22:26.227438 kubelet[2098]: I0625 16:22:26.227416 2098 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 16:22:26.227599 kubelet[2098]: I0625 16:22:26.227591 2098 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:22:26.229366 kubelet[2098]: E0625 16:22:26.229355 2098 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 25 16:22:26.308572 kubelet[2098]: E0625 16:22:26.308536 2098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="400ms" Jun 25 16:22:26.409814 kubelet[2098]: I0625 16:22:26.409796 2098 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 16:22:26.410010 kubelet[2098]: E0625 16:22:26.409991 2098 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Jun 25 16:22:26.423241 kubelet[2098]: I0625 16:22:26.423211 2098 topology_manager.go:215] "Topology Admit Handler" podUID="77d4f879911fdc4d972ad329af62281f" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 16:22:26.424042 kubelet[2098]: I0625 16:22:26.424029 2098 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 16:22:26.424944 kubelet[2098]: I0625 
16:22:26.424932 2098 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 16:22:26.429198 systemd[1]: Created slice kubepods-burstable-pod77d4f879911fdc4d972ad329af62281f.slice - libcontainer container kubepods-burstable-pod77d4f879911fdc4d972ad329af62281f.slice. Jun 25 16:22:26.447715 systemd[1]: Created slice kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice - libcontainer container kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice. Jun 25 16:22:26.458196 systemd[1]: Created slice kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice - libcontainer container kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice. Jun 25 16:22:26.507936 kubelet[2098]: I0625 16:22:26.507889 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77d4f879911fdc4d972ad329af62281f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"77d4f879911fdc4d972ad329af62281f\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:22:26.508069 kubelet[2098]: I0625 16:22:26.508056 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77d4f879911fdc4d972ad329af62281f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"77d4f879911fdc4d972ad329af62281f\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:22:26.508167 kubelet[2098]: I0625 16:22:26.508157 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:22:26.508251 kubelet[2098]: I0625 16:22:26.508240 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:22:26.508334 kubelet[2098]: I0625 16:22:26.508324 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jun 25 16:22:26.508437 kubelet[2098]: I0625 16:22:26.508426 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77d4f879911fdc4d972ad329af62281f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"77d4f879911fdc4d972ad329af62281f\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:22:26.508548 kubelet[2098]: I0625 16:22:26.508537 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:22:26.508654 
kubelet[2098]: I0625 16:22:26.508634 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:22:26.508705 kubelet[2098]: I0625 16:22:26.508667 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:22:26.710268 kubelet[2098]: E0625 16:22:26.709723 2098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="800ms" Jun 25 16:22:26.747891 containerd[1339]: time="2024-06-25T16:22:26.747859988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:77d4f879911fdc4d972ad329af62281f,Namespace:kube-system,Attempt:0,}" Jun 25 16:22:26.762576 containerd[1339]: time="2024-06-25T16:22:26.762531717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,}" Jun 25 16:22:26.762719 containerd[1339]: time="2024-06-25T16:22:26.762531770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,}" Jun 25 16:22:26.810632 kubelet[2098]: I0625 16:22:26.810611 2098 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 16:22:26.810921 kubelet[2098]: E0625 16:22:26.810895 2098 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Jun 25 16:22:26.941657 kubelet[2098]: W0625 16:22:26.941618 2098 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jun 25 16:22:26.941657 kubelet[2098]: E0625 16:22:26.941657 2098 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jun 25 16:22:27.167886 kubelet[2098]: W0625 16:22:27.167849 2098 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jun 25 16:22:27.167968 kubelet[2098]: E0625 16:22:27.167890 2098 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
139.178.70.109:6443: connect: connection refused Jun 25 16:22:27.258981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount772542657.mount: Deactivated successfully. Jun 25 16:22:27.261478 containerd[1339]: time="2024-06-25T16:22:27.261455544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:22:27.262812 containerd[1339]: time="2024-06-25T16:22:27.262735103Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 25 16:22:27.263309 containerd[1339]: time="2024-06-25T16:22:27.263281895Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:22:27.263536 containerd[1339]: time="2024-06-25T16:22:27.263364587Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:22:27.264254 containerd[1339]: time="2024-06-25T16:22:27.264241870Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:22:27.265113 containerd[1339]: time="2024-06-25T16:22:27.265060692Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:22:27.265521 containerd[1339]: time="2024-06-25T16:22:27.265478599Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:22:27.265673 containerd[1339]: time="2024-06-25T16:22:27.265654258Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:22:27.265894 containerd[1339]: time="2024-06-25T16:22:27.265880870Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:22:27.267846 containerd[1339]: time="2024-06-25T16:22:27.267831150Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:22:27.269432 containerd[1339]: time="2024-06-25T16:22:27.269412561Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 506.795121ms" Jun 25 16:22:27.269741 containerd[1339]: time="2024-06-25T16:22:27.269719449Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:22:27.270304 containerd[1339]: time="2024-06-25T16:22:27.270289073Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 520.290211ms" Jun 25 16:22:27.270861 containerd[1339]: time="2024-06-25T16:22:27.270837471Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 508.098087ms" Jun 25 16:22:27.273283 containerd[1339]: time="2024-06-25T16:22:27.273270671Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:22:27.273757 containerd[1339]: time="2024-06-25T16:22:27.273744887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:22:27.274300 containerd[1339]: time="2024-06-25T16:22:27.274288214Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:22:27.274821 containerd[1339]: time="2024-06-25T16:22:27.274810015Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:22:27.297964 kubelet[2098]: W0625 16:22:27.297930 2098 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.109:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jun 25 16:22:27.298025 kubelet[2098]: E0625 16:22:27.297967 2098 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.109:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jun 25 16:22:27.379115 containerd[1339]: time="2024-06-25T16:22:27.379060339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:22:27.379275 containerd[1339]: time="2024-06-25T16:22:27.379259079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:22:27.379377 containerd[1339]: time="2024-06-25T16:22:27.379349304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:22:27.379464 containerd[1339]: time="2024-06-25T16:22:27.379450201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:22:27.380153 containerd[1339]: time="2024-06-25T16:22:27.380018257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:22:27.380153 containerd[1339]: time="2024-06-25T16:22:27.380045136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:22:27.380153 containerd[1339]: time="2024-06-25T16:22:27.380057528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:22:27.380153 containerd[1339]: time="2024-06-25T16:22:27.380066521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:22:27.389055 containerd[1339]: time="2024-06-25T16:22:27.388926190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:22:27.389055 containerd[1339]: time="2024-06-25T16:22:27.388953620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:22:27.389055 containerd[1339]: time="2024-06-25T16:22:27.388962950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:22:27.389055 containerd[1339]: time="2024-06-25T16:22:27.388968395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:22:27.399611 systemd[1]: Started cri-containerd-e9ad5acbb66f0ce8327b7da624d6ca80b42e7490b60431ca7baf52e46d916957.scope - libcontainer container e9ad5acbb66f0ce8327b7da624d6ca80b42e7490b60431ca7baf52e46d916957. Jun 25 16:22:27.401299 systemd[1]: Started cri-containerd-1e18131fe62d77547992d09419dd741277d688d247975e5fa72e4ebd4a67247a.scope - libcontainer container 1e18131fe62d77547992d09419dd741277d688d247975e5fa72e4ebd4a67247a. 
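
The systemd entries just above show containerd's runc shim registering each started container with systemd as a transient scope unit named cri-containerd-<container-id>.scope, where the ID is the same 64-character hex container ID that appears in the containerd and audit records. Below is a minimal sketch of that naming convention; the helper functions are illustrative only (not containerd API), and the sample ID is copied verbatim from the entries above.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Pattern taken from the journal entries above: one transient systemd unit
// per started container, named "cri-containerd-<64-hex-id>.scope".
var scopeRE = regexp.MustCompile(`^cri-containerd-([0-9a-f]{64})\.scope$`)

// scopeUnitFor builds the scope unit name for a container ID (illustrative helper).
func scopeUnitFor(containerID string) string {
	return fmt.Sprintf("cri-containerd-%s.scope", containerID)
}

// containerIDFrom recovers the container ID from a scope unit name, if it matches.
func containerIDFrom(unit string) (string, bool) {
	m := scopeRE.FindStringSubmatch(strings.TrimSpace(unit))
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	// One of the sandbox container IDs from the entries above.
	id := "e9ad5acbb66f0ce8327b7da624d6ca80b42e7490b60431ca7baf52e46d916957"
	unit := scopeUnitFor(id)
	fmt.Println(unit) // cri-containerd-e9ad5acb...957.scope
	back, ok := containerIDFrom(unit)
	fmt.Println(ok, back == id) // true true
}
```
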
Jun 25 16:22:27.409055 kubelet[2098]: W0625 16:22:27.405705 2098 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jun 25 16:22:27.409055 kubelet[2098]: E0625 16:22:27.405754 2098 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jun 25 16:22:27.409000 audit: BPF prog-id=58 op=LOAD Jun 25 16:22:27.410000 audit: BPF prog-id=59 op=LOAD Jun 25 16:22:27.410000 audit[2177]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2152 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:27.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539616435616362623636663063653833323762376461363234643663 Jun 25 16:22:27.410000 audit: BPF prog-id=60 op=LOAD Jun 25 16:22:27.410000 audit[2177]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2152 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:27.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539616435616362623636663063653833323762376461363234643663 Jun 25 16:22:27.410000 audit: BPF prog-id=60 op=UNLOAD Jun 25 16:22:27.410000 audit: BPF prog-id=59 op=UNLOAD Jun 25 16:22:27.410000 audit: BPF prog-id=61 op=LOAD Jun 25 16:22:27.410000 audit[2177]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2152 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:27.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539616435616362623636663063653833323762376461363234643663 Jun 25 16:22:27.415588 systemd[1]: Started cri-containerd-9ea33dc832f8275c89181c5b6dc35f7c5a931725b45420fb73b4b0dff69c5a43.scope - libcontainer container 9ea33dc832f8275c89181c5b6dc35f7c5a931725b45420fb73b4b0dff69c5a43. 
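
The audit PROCTITLE records above print the runc command line as a hex string because the kernel stores the process title with NUL bytes separating the arguments, and auditd hex-encodes any field containing non-printable bytes. The following is a minimal sketch for decoding such a value back into a readable command line; the helper is my own, not part of auditd or runc, and the sample input is a prefix of one of the proctitle values logged above, which decodes to "runc --root /run/containerd/runc/k8s.io".

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle converts an audit PROCTITLE hex value back into the original
// command line. The kernel records argv with NUL separators, and auditd
// hex-encodes the field because of those non-printable bytes; replacing the
// NULs with spaces yields the command line in its usual form.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	// Prefix of a proctitle value from the runc audit records above.
	const sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	cmd, err := decodeProctitle(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // runc --root /run/containerd/runc/k8s.io
}
```
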
Jun 25 16:22:27.417000 audit: BPF prog-id=62 op=LOAD Jun 25 16:22:27.417000 audit: BPF prog-id=63 op=LOAD Jun 25 16:22:27.417000 audit[2178]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2154 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:27.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165313831333166653632643737353437393932643039343139646437 Jun 25 16:22:27.418000 audit: BPF prog-id=64 op=LOAD Jun 25 16:22:27.418000 audit[2178]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2154 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:27.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165313831333166653632643737353437393932643039343139646437 Jun 25 16:22:27.418000 audit: BPF prog-id=64 op=UNLOAD Jun 25 16:22:27.418000 audit: BPF prog-id=63 op=UNLOAD Jun 25 16:22:27.418000 audit: BPF prog-id=65 op=LOAD Jun 25 16:22:27.418000 audit[2178]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2154 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:27.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165313831333166653632643737353437393932643039343139646437 Jun 25 16:22:27.425000 audit: BPF prog-id=66 op=LOAD Jun 25 16:22:27.425000 audit: BPF prog-id=67 op=LOAD Jun 25 16:22:27.425000 audit[2201]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2176 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:27.425000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613333646338333266383237356338393138316335623664633335 Jun 25 16:22:27.425000 audit: BPF prog-id=68 op=LOAD Jun 25 16:22:27.425000 audit[2201]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2176 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:27.425000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613333646338333266383237356338393138316335623664633335 Jun 25 16:22:27.425000 audit: BPF prog-id=68 op=UNLOAD Jun 25 16:22:27.425000 audit: BPF prog-id=67 op=UNLOAD Jun 25 16:22:27.425000 audit: BPF prog-id=69 op=LOAD Jun 25 16:22:27.425000 audit[2201]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2176 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:27.425000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613333646338333266383237356338393138316335623664633335 Jun 25 16:22:27.452584 containerd[1339]: time="2024-06-25T16:22:27.452561514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ea33dc832f8275c89181c5b6dc35f7c5a931725b45420fb73b4b0dff69c5a43\"" Jun 25 16:22:27.455117 containerd[1339]: time="2024-06-25T16:22:27.455100430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:77d4f879911fdc4d972ad329af62281f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e18131fe62d77547992d09419dd741277d688d247975e5fa72e4ebd4a67247a\"" Jun 25 16:22:27.456651 containerd[1339]: time="2024-06-25T16:22:27.456635541Z" level=info msg="CreateContainer within sandbox \"9ea33dc832f8275c89181c5b6dc35f7c5a931725b45420fb73b4b0dff69c5a43\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 16:22:27.458660 containerd[1339]: time="2024-06-25T16:22:27.458646657Z" level=info msg="CreateContainer within sandbox \"1e18131fe62d77547992d09419dd741277d688d247975e5fa72e4ebd4a67247a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 16:22:27.465477 containerd[1339]: time="2024-06-25T16:22:27.465456010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9ad5acbb66f0ce8327b7da624d6ca80b42e7490b60431ca7baf52e46d916957\"" Jun 25 16:22:27.466894 containerd[1339]: time="2024-06-25T16:22:27.466876369Z" level=info msg="CreateContainer within sandbox \"e9ad5acbb66f0ce8327b7da624d6ca80b42e7490b60431ca7baf52e46d916957\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 16:22:27.471230 containerd[1339]: time="2024-06-25T16:22:27.471211060Z" level=info msg="CreateContainer within sandbox \"9ea33dc832f8275c89181c5b6dc35f7c5a931725b45420fb73b4b0dff69c5a43\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"21a07c2560c5c2f12b0e1c122510d2359f0794e33f373fc92e80a235b10c28ab\"" Jun 25 16:22:27.471962 containerd[1339]: time="2024-06-25T16:22:27.471943444Z" level=info msg="StartContainer for \"21a07c2560c5c2f12b0e1c122510d2359f0794e33f373fc92e80a235b10c28ab\"" Jun 25 16:22:27.474388 containerd[1339]: time="2024-06-25T16:22:27.474371858Z" level=info msg="CreateContainer within sandbox \"1e18131fe62d77547992d09419dd741277d688d247975e5fa72e4ebd4a67247a\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"60d359d3b7b8f7f93f1633f2b039b16adb17b1ddf9c2864058a36113c34584aa\"" Jun 25 16:22:27.477172 containerd[1339]: time="2024-06-25T16:22:27.477152602Z" level=info msg="CreateContainer within sandbox \"e9ad5acbb66f0ce8327b7da624d6ca80b42e7490b60431ca7baf52e46d916957\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4aef552c7412052751c0693dc5e41184009a1e9ac476d2abd58784f3306c5f90\"" Jun 25 16:22:27.477461 containerd[1339]: time="2024-06-25T16:22:27.477448133Z" level=info msg="StartContainer for \"60d359d3b7b8f7f93f1633f2b039b16adb17b1ddf9c2864058a36113c34584aa\"" Jun 25 16:22:27.478699 containerd[1339]: time="2024-06-25T16:22:27.478316303Z" level=info msg="StartContainer for \"4aef552c7412052751c0693dc5e41184009a1e9ac476d2abd58784f3306c5f90\"" Jun 25 16:22:27.496578 systemd[1]: Started cri-containerd-21a07c2560c5c2f12b0e1c122510d2359f0794e33f373fc92e80a235b10c28ab.scope - libcontainer container 21a07c2560c5c2f12b0e1c122510d2359f0794e33f373fc92e80a235b10c28ab. Jun 25 16:22:27.498319 systemd[1]: Started cri-containerd-4aef552c7412052751c0693dc5e41184009a1e9ac476d2abd58784f3306c5f90.scope - libcontainer container 4aef552c7412052751c0693dc5e41184009a1e9ac476d2abd58784f3306c5f90. Jun 25 16:22:27.506000 audit: BPF prog-id=70 op=LOAD Jun 25 16:22:27.506000 audit: BPF prog-id=71 op=LOAD Jun 25 16:22:27.506000 audit[2283]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2176 pid=2283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:27.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231613037633235363063356332663132623065316331323235313064 Jun 25 16:22:27.506000 audit: BPF prog-id=72 op=LOAD Jun 25 16:22:27.506000 audit[2283]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2176 pid=2283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:27.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231613037633235363063356332663132623065316331323235313064 Jun 25 16:22:27.506000 audit: BPF prog-id=72 op=UNLOAD Jun 25 16:22:27.506000 audit: BPF prog-id=71 op=UNLOAD Jun 25 16:22:27.506000 audit: BPF prog-id=73 op=LOAD Jun 25 16:22:27.506000 audit[2283]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2176 pid=2283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:27.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231613037633235363063356332663132623065316331323235313064 Jun 25 16:22:27.510306 kubelet[2098]: 
E0625 16:22:27.510275 2098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="1.6s" Jun 25 16:22:27.514605 systemd[1]: Started cri-containerd-60d359d3b7b8f7f93f1633f2b039b16adb17b1ddf9c2864058a36113c34584aa.scope - libcontainer container 60d359d3b7b8f7f93f1633f2b039b16adb17b1ddf9c2864058a36113c34584aa. Jun 25 16:22:27.521000 audit: BPF prog-id=74 op=LOAD Jun 25 16:22:27.521000 audit: BPF prog-id=75 op=LOAD Jun 25 16:22:27.521000 audit[2293]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2152 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:27.521000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461656635353263373431323035323735316330363933646335653431 Jun 25 16:22:27.522000 audit: BPF prog-id=76 op=LOAD Jun 25 16:22:27.522000 audit[2293]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2152 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:27.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461656635353263373431323035323735316330363933646335653431 Jun 25 16:22:27.522000 audit: BPF prog-id=76 op=UNLOAD Jun 25 16:22:27.522000 audit: BPF prog-id=75 op=UNLOAD Jun 25 16:22:27.522000 audit: BPF prog-id=77 op=LOAD Jun 25 16:22:27.522000 audit[2293]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2152 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:27.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461656635353263373431323035323735316330363933646335653431 Jun 25 16:22:27.524000 audit: BPF prog-id=78 op=LOAD Jun 25 16:22:27.524000 audit: BPF prog-id=79 op=LOAD Jun 25 16:22:27.524000 audit[2292]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2154 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:27.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630643335396433623762386637663933663136333366326230333962 Jun 25 16:22:27.525000 audit: BPF prog-id=80 op=LOAD Jun 25 16:22:27.525000 audit[2292]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2154 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:27.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630643335396433623762386637663933663136333366326230333962 Jun 25 16:22:27.525000 audit: BPF prog-id=80 op=UNLOAD Jun 25 16:22:27.525000 audit: BPF prog-id=79 op=UNLOAD Jun 25 16:22:27.525000 audit: BPF prog-id=81 op=LOAD Jun 25 16:22:27.525000 audit[2292]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2154 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:27.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630643335396433623762386637663933663136333366326230333962 Jun 25 16:22:27.549242 containerd[1339]: time="2024-06-25T16:22:27.549219463Z" level=info msg="StartContainer for \"4aef552c7412052751c0693dc5e41184009a1e9ac476d2abd58784f3306c5f90\" returns successfully" Jun 25 16:22:27.549407 containerd[1339]: time="2024-06-25T16:22:27.549394528Z" level=info msg="StartContainer for \"60d359d3b7b8f7f93f1633f2b039b16adb17b1ddf9c2864058a36113c34584aa\" returns successfully" Jun 25 16:22:27.564851 containerd[1339]: time="2024-06-25T16:22:27.564817406Z" level=info msg="StartContainer for \"21a07c2560c5c2f12b0e1c122510d2359f0794e33f373fc92e80a235b10c28ab\" returns successfully" Jun 25 16:22:27.612824 kubelet[2098]: I0625 16:22:27.612447 2098 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 16:22:27.612824 kubelet[2098]: E0625 16:22:27.612802 2098 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Jun 25 16:22:28.123806 kubelet[2098]: E0625 16:22:28.123787 2098 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.109:6443: connect: connection refused Jun 25 16:22:28.200000 audit[2321]: AVC avc: denied { watch } for pid=2321 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=1041984 scontext=system_u:system_r:container_t:s0:c391,c737 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:22:28.200000 audit[2321]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c000ae4480 a2=fc6 a3=0 items=0 ppid=2152 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c391,c737 key=(null) Jun 25 16:22:28.200000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:22:28.201000 audit[2321]: AVC avc: denied { watch } for pid=2321 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c391,c737 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:22:28.201000 audit[2321]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c000515c60 a2=fc6 a3=0 items=0 ppid=2152 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c391,c737 key=(null) Jun 25 16:22:28.201000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:22:28.886000 audit[2331]: AVC avc: denied { watch } for pid=2331 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=1041984 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:22:28.887670 kernel: kauditd_printk_skb: 140 callbacks suppressed Jun 25 16:22:28.887738 kernel: audit: type=1400 audit(1719332548.886:345): avc: denied { watch } for pid=2331 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=1041984 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:22:28.886000 audit[2331]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=42 a1=c006144e70 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:22:28.889599 kernel: audit: type=1300 audit(1719332548.886:345): arch=c000003e syscall=254 success=no exit=-13 a0=42 a1=c006144e70 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:22:28.886000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:22:28.893647 kernel: audit: type=1327 audit(1719332548.886:345): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:22:28.893674 kernel: audit: type=1400 audit(1719332548.887:346): avc: denied { watch } for pid=2331 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=1041980 scontext=system_u:system_r:container_t:s0:c266,c375 
tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:22:28.887000 audit[2331]: AVC avc: denied { watch } for pid=2331 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=1041980 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:22:28.896183 kernel: audit: type=1300 audit(1719332548.887:346): arch=c000003e syscall=254 success=no exit=-13 a0=43 a1=c00628e090 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:22:28.887000 audit[2331]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=43 a1=c00628e090 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:22:28.887000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:22:28.900009 kernel: audit: type=1327 audit(1719332548.887:346): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:22:28.900038 kernel: audit: type=1400 audit(1719332548.889:347): avc: denied { watch } for pid=2331 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=1041986 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:22:28.889000 audit[2331]: AVC avc: denied { watch } for pid=2331 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=1041986 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:22:28.901860 kernel: audit: type=1300 audit(1719332548.889:347): arch=c000003e syscall=254 success=no exit=-13 a0=43 a1=c00628e240 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:22:28.889000 audit[2331]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=43 a1=c00628e240 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:22:28.904068 kernel: audit: type=1327 audit(1719332548.889:347): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:22:28.889000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:22:28.904000 audit[2331]: AVC avc: denied { watch } for pid=2331 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:22:28.904000 audit[2331]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=47 a1=c006f96220 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:22:28.904000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:22:28.909496 kernel: audit: type=1400 audit(1719332548.904:348): avc: denied { watch } for pid=2331 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:22:28.909000 audit[2331]: AVC avc: denied { watch } for pid=2331 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:22:28.909000 audit[2331]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=60 a1=c00638ef60 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:22:28.909000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:22:28.909000 audit[2331]: AVC avc: denied { watch } for pid=2331 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=1041984 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:22:28.909000 audit[2331]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=60 a1=c007504ea0 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:22:28.909000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:22:29.094235 kubelet[2098]: I0625 16:22:29.094182 2098 apiserver.go:52] "Watching apiserver" Jun 25 16:22:29.107607 kubelet[2098]: I0625 
16:22:29.107588 2098 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 16:22:29.112186 kubelet[2098]: E0625 16:22:29.112174 2098 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 25 16:22:29.214546 kubelet[2098]: I0625 16:22:29.214520 2098 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 16:22:29.219854 kubelet[2098]: I0625 16:22:29.219834 2098 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jun 25 16:22:30.766400 systemd[1]: Reloading. Jun 25 16:22:30.878470 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 25 16:22:30.890326 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:22:30.940000 audit: BPF prog-id=82 op=LOAD Jun 25 16:22:30.940000 audit: BPF prog-id=44 op=UNLOAD Jun 25 16:22:30.941000 audit: BPF prog-id=83 op=LOAD Jun 25 16:22:30.941000 audit: BPF prog-id=74 op=UNLOAD Jun 25 16:22:30.941000 audit: BPF prog-id=84 op=LOAD Jun 25 16:22:30.941000 audit: BPF prog-id=78 op=UNLOAD Jun 25 16:22:30.942000 audit: BPF prog-id=85 op=LOAD Jun 25 16:22:30.942000 audit: BPF prog-id=45 op=UNLOAD Jun 25 16:22:30.942000 audit: BPF prog-id=86 op=LOAD Jun 25 16:22:30.942000 audit: BPF prog-id=87 op=LOAD Jun 25 16:22:30.942000 audit: BPF prog-id=46 op=UNLOAD Jun 25 16:22:30.942000 audit: BPF prog-id=47 op=UNLOAD Jun 25 16:22:30.942000 audit: BPF prog-id=88 op=LOAD Jun 25 16:22:30.942000 audit: BPF prog-id=48 op=UNLOAD Jun 25 16:22:30.942000 audit: BPF prog-id=89 op=LOAD Jun 25 16:22:30.942000 audit: BPF prog-id=90 op=LOAD Jun 25 16:22:30.942000 audit: BPF prog-id=49 op=UNLOAD Jun 25 16:22:30.942000 audit: BPF prog-id=50 op=UNLOAD Jun 25 16:22:30.942000 audit: BPF prog-id=91 op=LOAD Jun 25 16:22:30.942000 audit: BPF prog-id=58 op=UNLOAD Jun 25 16:22:30.943000 audit: BPF prog-id=92 op=LOAD Jun 25 16:22:30.943000 audit: BPF prog-id=93 op=LOAD Jun 25 16:22:30.943000 audit: BPF prog-id=51 op=UNLOAD Jun 25 16:22:30.943000 audit: BPF prog-id=52 op=UNLOAD Jun 25 16:22:30.943000 audit: BPF prog-id=94 op=LOAD Jun 25 16:22:30.943000 audit: BPF prog-id=53 op=UNLOAD Jun 25 16:22:30.944000 audit: BPF prog-id=95 op=LOAD Jun 25 16:22:30.944000 audit: BPF prog-id=66 op=UNLOAD Jun 25 16:22:30.945000 audit: BPF prog-id=96 op=LOAD Jun 25 16:22:30.945000 audit: BPF prog-id=62 op=UNLOAD Jun 25 16:22:30.945000 audit: BPF prog-id=97 op=LOAD Jun 25 16:22:30.945000 audit: BPF prog-id=54 op=UNLOAD Jun 25 16:22:30.945000 audit: BPF prog-id=98 op=LOAD Jun 25 16:22:30.946000 audit: BPF prog-id=99 op=LOAD Jun 25 16:22:30.946000 audit: BPF prog-id=55 op=UNLOAD Jun 25 16:22:30.946000 audit: BPF prog-id=56 op=UNLOAD Jun 25 16:22:30.947000 audit: BPF prog-id=100 op=LOAD Jun 25 16:22:30.947000 audit: BPF prog-id=57 op=UNLOAD Jun 25 16:22:30.948000 audit: BPF prog-id=101 op=LOAD Jun 25 16:22:30.948000 audit: BPF prog-id=70 op=UNLOAD Jun 25 16:22:30.955914 kubelet[2098]: I0625 16:22:30.955832 2098 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:22:30.956081 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
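
The SELinux AVC records earlier in this span show the kube-controller-manager and kube-apiserver containers being denied the watch permission on the certificate files under /etc/kubernetes/pki: the processes run as container_t while the files on the overlay mount are labeled etc_t, and permissive=0 indicates the denial is enforced. Below is a minimal sketch for pulling the relevant fields out of such a journal line; the regular expression and the sample line are taken from the records above, and the helper itself is illustrative, not part of any audit tooling.

```go
package main

import (
	"fmt"
	"regexp"
)

// avcRE extracts the fields visible in the AVC denial records above: the
// denied permission, the command, the file path, the source and target
// SELinux contexts, and the target class.
var avcRE = regexp.MustCompile(
	`avc:\s+denied\s+\{ (\w+) \} for\s+pid=\d+ comm="([^"]+)" path="([^"]+)".*?scontext=(\S+) tcontext=(\S+) tclass=(\w+)`)

func main() {
	// Sample line copied from the journal above (timestamp dropped).
	line := `audit[2321]: AVC avc: denied { watch } for pid=2321 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c391,c737 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0`
	if m := avcRE.FindStringSubmatch(line); m != nil {
		fmt.Printf("perm=%s comm=%s path=%s\nscontext=%s\ntcontext=%s tclass=%s\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
```
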
Jun 25 16:22:30.974741 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:22:30.974911 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:22:30.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:30.980971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:22:31.355518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:22:31.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:31.428639 kubelet[2453]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:22:31.428928 kubelet[2453]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:22:31.428959 kubelet[2453]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:22:31.430851 kubelet[2453]: I0625 16:22:31.430831 2453 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:22:31.433474 kubelet[2453]: I0625 16:22:31.433458 2453 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 16:22:31.433474 kubelet[2453]: I0625 16:22:31.433472 2453 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:22:31.433599 kubelet[2453]: I0625 16:22:31.433587 2453 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 16:22:31.434344 kubelet[2453]: I0625 16:22:31.434333 2453 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 16:22:31.436758 kubelet[2453]: I0625 16:22:31.436735 2453 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:22:31.441060 kubelet[2453]: I0625 16:22:31.441045 2453 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:22:31.441159 kubelet[2453]: I0625 16:22:31.441140 2453 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:22:31.441271 kubelet[2453]: I0625 16:22:31.441159 2453 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:22:31.441335 kubelet[2453]: I0625 16:22:31.441280 2453 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:22:31.441335 kubelet[2453]: I0625 16:22:31.441287 2453 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:22:31.441928 kubelet[2453]: I0625 16:22:31.441917 2453 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:22:31.441984 kubelet[2453]: I0625 16:22:31.441974 2453 kubelet.go:400] "Attempting to sync node with API server" Jun 25 16:22:31.442008 kubelet[2453]: I0625 16:22:31.441985 2453 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:22:31.442008 kubelet[2453]: I0625 16:22:31.442000 2453 kubelet.go:312] "Adding apiserver pod source" Jun 25 16:22:31.442044 kubelet[2453]: I0625 16:22:31.442009 2453 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:22:31.444566 kubelet[2453]: I0625 16:22:31.444556 2453 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:22:31.444733 kubelet[2453]: I0625 16:22:31.444725 2453 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 16:22:31.448213 kubelet[2453]: I0625 16:22:31.448206 2453 server.go:1264] "Started kubelet" Jun 25 16:22:31.451083 kubelet[2453]: I0625 16:22:31.448605 2453 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:22:31.451083 kubelet[2453]: I0625 16:22:31.449311 2453 server.go:455] "Adding debug handlers to kubelet server" Jun 25 16:22:31.451918 kubelet[2453]: I0625 16:22:31.451885 2453 ratelimit.go:55] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 16:22:31.452061 kubelet[2453]: I0625 16:22:31.452054 2453 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:22:31.452384 kubelet[2453]: I0625 16:22:31.452373 2453 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:22:31.457118 kubelet[2453]: I0625 16:22:31.457101 2453 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:22:31.461129 kubelet[2453]: E0625 16:22:31.461117 2453 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:22:31.461585 kubelet[2453]: I0625 16:22:31.461560 2453 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 16:22:31.461783 kubelet[2453]: I0625 16:22:31.461774 2453 reconciler.go:26] "Reconciler: start to sync state" Jun 25 16:22:31.462406 kubelet[2453]: I0625 16:22:31.462398 2453 factory.go:221] Registration of the systemd container factory successfully Jun 25 16:22:31.462668 kubelet[2453]: I0625 16:22:31.462588 2453 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 16:22:31.465434 kubelet[2453]: I0625 16:22:31.465412 2453 factory.go:221] Registration of the containerd container factory successfully Jun 25 16:22:31.471028 kubelet[2453]: I0625 16:22:31.471003 2453 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:22:31.471609 kubelet[2453]: I0625 16:22:31.471595 2453 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 16:22:31.471643 kubelet[2453]: I0625 16:22:31.471615 2453 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:22:31.471643 kubelet[2453]: I0625 16:22:31.471626 2453 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 16:22:31.471702 kubelet[2453]: E0625 16:22:31.471647 2453 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:22:31.504256 kubelet[2453]: I0625 16:22:31.504243 2453 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:22:31.504355 kubelet[2453]: I0625 16:22:31.504348 2453 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:22:31.504394 kubelet[2453]: I0625 16:22:31.504389 2453 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:22:31.504527 kubelet[2453]: I0625 16:22:31.504519 2453 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 16:22:31.504596 kubelet[2453]: I0625 16:22:31.504583 2453 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 16:22:31.504631 kubelet[2453]: I0625 16:22:31.504627 2453 policy_none.go:49] "None policy: Start" Jun 25 16:22:31.504962 kubelet[2453]: I0625 16:22:31.504949 2453 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 16:22:31.504996 kubelet[2453]: I0625 16:22:31.504964 2453 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:22:31.505044 kubelet[2453]: I0625 16:22:31.505034 2453 state_mem.go:75] "Updated machine memory state" Jun 25 16:22:31.507384 kubelet[2453]: I0625 16:22:31.507370 2453 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:22:31.507503 
kubelet[2453]: I0625 16:22:31.507469 2453 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 16:22:31.507549 kubelet[2453]: I0625 16:22:31.507539 2453 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:22:31.560521 kubelet[2453]: I0625 16:22:31.560503 2453 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 16:22:31.566121 kubelet[2453]: I0625 16:22:31.566100 2453 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jun 25 16:22:31.566218 kubelet[2453]: I0625 16:22:31.566164 2453 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jun 25 16:22:31.572029 kubelet[2453]: I0625 16:22:31.572003 2453 topology_manager.go:215] "Topology Admit Handler" podUID="77d4f879911fdc4d972ad329af62281f" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 16:22:31.572167 kubelet[2453]: I0625 16:22:31.572157 2453 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 16:22:31.572261 kubelet[2453]: I0625 16:22:31.572253 2453 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 16:22:31.575753 kubelet[2453]: E0625 16:22:31.575730 2453 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jun 25 16:22:31.575871 kubelet[2453]: E0625 16:22:31.575859 2453 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 25 16:22:31.663185 kubelet[2453]: I0625 16:22:31.663164 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77d4f879911fdc4d972ad329af62281f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"77d4f879911fdc4d972ad329af62281f\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:22:31.763644 kubelet[2453]: I0625 16:22:31.763618 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:22:31.763735 kubelet[2453]: I0625 16:22:31.763658 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77d4f879911fdc4d972ad329af62281f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"77d4f879911fdc4d972ad329af62281f\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:22:31.763735 kubelet[2453]: I0625 16:22:31.763671 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:22:31.763735 kubelet[2453]: I0625 16:22:31.763682 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:22:31.763735 kubelet[2453]: I0625 16:22:31.763695 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:22:31.763844 kubelet[2453]: I0625 16:22:31.763712 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:22:31.763844 kubelet[2453]: I0625 16:22:31.763775 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jun 25 16:22:31.763844 kubelet[2453]: I0625 16:22:31.763816 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77d4f879911fdc4d972ad329af62281f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"77d4f879911fdc4d972ad329af62281f\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:22:32.005000 audit[2321]: AVC avc: denied { watch } for pid=2321 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="sda9" ino=521020 scontext=system_u:system_r:container_t:s0:c391,c737 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 16:22:32.005000 audit[2321]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c000c9a040 a2=fc6 a3=0 items=0 ppid=2152 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c391,c737 key=(null) Jun 25 16:22:32.005000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:22:32.444718 kubelet[2453]: I0625 16:22:32.444682 2453 apiserver.go:52] "Watching apiserver" Jun 25 16:22:32.523625 kubelet[2453]: E0625 16:22:32.523600 2453 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jun 25 16:22:32.535929 kubelet[2453]: E0625 16:22:32.535905 2453 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 25 16:22:32.562150 kubelet[2453]: I0625 16:22:32.562127 2453 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 16:22:32.584616 
kubelet[2453]: I0625 16:22:32.584474 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.584461854 podStartE2EDuration="1.584461854s" podCreationTimestamp="2024-06-25 16:22:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:22:32.569710751 +0000 UTC m=+1.210751973" watchObservedRunningTime="2024-06-25 16:22:32.584461854 +0000 UTC m=+1.225503077" Jun 25 16:22:32.589344 kubelet[2453]: I0625 16:22:32.589310 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.5892980100000003 podStartE2EDuration="2.58929801s" podCreationTimestamp="2024-06-25 16:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:22:32.58468196 +0000 UTC m=+1.225723177" watchObservedRunningTime="2024-06-25 16:22:32.58929801 +0000 UTC m=+1.230339233" Jun 25 16:22:32.597554 kubelet[2453]: I0625 16:22:32.597521 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.597510154 podStartE2EDuration="2.597510154s" podCreationTimestamp="2024-06-25 16:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:22:32.589443988 +0000 UTC m=+1.230485211" watchObservedRunningTime="2024-06-25 16:22:32.597510154 +0000 UTC m=+1.238551376" Jun 25 16:22:33.768000 audit[2321]: AVC avc: denied { watch } for pid=2321 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c391,c737 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:22:33.768000 audit[2321]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000e4a2e0 a2=fc6 a3=0 items=0 ppid=2152 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c391,c737 key=(null) Jun 25 16:22:33.768000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:22:33.769000 audit[2321]: AVC avc: denied { watch } for pid=2321 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c391,c737 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:22:33.769000 audit[2321]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000e4a4a0 a2=fc6 a3=0 items=0 ppid=2152 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c391,c737 key=(null) Jun 25 16:22:33.769000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:22:33.769000 audit[2321]: AVC avc: denied { watch } for pid=2321 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c391,c737 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:22:33.769000 audit[2321]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000e4a660 a2=fc6 a3=0 items=0 ppid=2152 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c391,c737 key=(null) Jun 25 16:22:33.769000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:22:33.770000 audit[2321]: AVC avc: denied { watch } for pid=2321 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c391,c737 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:22:33.770000 audit[2321]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000e4a820 a2=fc6 a3=0 items=0 ppid=2152 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c391,c737 key=(null) Jun 25 16:22:33.770000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:22:35.505704 sudo[1586]: pam_unix(sudo:session): session closed for user root Jun 25 16:22:35.507357 kernel: kauditd_printk_skb: 65 callbacks suppressed Jun 25 16:22:35.507402 kernel: audit: type=1106 audit(1719332555.505:398): pid=1586 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:22:35.505000 audit[1586]: USER_END pid=1586 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:22:35.505000 audit[1586]: CRED_DISP pid=1586 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:22:35.509946 kernel: audit: type=1104 audit(1719332555.505:399): pid=1586 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:22:35.510105 sshd[1583]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:35.510000 audit[1583]: USER_END pid=1583 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:22:35.513405 systemd[1]: sshd@6-139.178.70.109:22-139.178.68.195:46840.service: Deactivated successfully. Jun 25 16:22:35.513594 kernel: audit: type=1106 audit(1719332555.510:400): pid=1583 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:22:35.510000 audit[1583]: CRED_DISP pid=1583 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:22:35.513872 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 16:22:35.513972 systemd[1]: session-9.scope: Consumed 3.505s CPU time. Jun 25 16:22:35.514536 systemd-logind[1327]: Session 9 logged out. Waiting for processes to exit. Jun 25 16:22:35.515044 systemd-logind[1327]: Removed session 9. Jun 25 16:22:35.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.109:22-139.178.68.195:46840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:35.517402 kernel: audit: type=1104 audit(1719332555.510:401): pid=1583 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:22:35.517440 kernel: audit: type=1131 audit(1719332555.513:402): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.109:22-139.178.68.195:46840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:45.289336 kubelet[2453]: I0625 16:22:45.289317 2453 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 16:22:45.290049 containerd[1339]: time="2024-06-25T16:22:45.289991109Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 16:22:45.290274 kubelet[2453]: I0625 16:22:45.290263 2453 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 16:22:46.080473 kubelet[2453]: I0625 16:22:46.080441 2453 topology_manager.go:215] "Topology Admit Handler" podUID="23760d65-ec8d-43df-893d-1b96955ebc83" podNamespace="kube-system" podName="kube-proxy-wdgzc" Jun 25 16:22:46.084047 systemd[1]: Created slice kubepods-besteffort-pod23760d65_ec8d_43df_893d_1b96955ebc83.slice - libcontainer container kubepods-besteffort-pod23760d65_ec8d_43df_893d_1b96955ebc83.slice. 
Jun 25 16:22:46.144910 kubelet[2453]: I0625 16:22:46.144882 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/23760d65-ec8d-43df-893d-1b96955ebc83-kube-proxy\") pod \"kube-proxy-wdgzc\" (UID: \"23760d65-ec8d-43df-893d-1b96955ebc83\") " pod="kube-system/kube-proxy-wdgzc" Jun 25 16:22:46.144910 kubelet[2453]: I0625 16:22:46.144912 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23760d65-ec8d-43df-893d-1b96955ebc83-xtables-lock\") pod \"kube-proxy-wdgzc\" (UID: \"23760d65-ec8d-43df-893d-1b96955ebc83\") " pod="kube-system/kube-proxy-wdgzc" Jun 25 16:22:46.145041 kubelet[2453]: I0625 16:22:46.144925 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdxgb\" (UniqueName: \"kubernetes.io/projected/23760d65-ec8d-43df-893d-1b96955ebc83-kube-api-access-xdxgb\") pod \"kube-proxy-wdgzc\" (UID: \"23760d65-ec8d-43df-893d-1b96955ebc83\") " pod="kube-system/kube-proxy-wdgzc" Jun 25 16:22:46.145041 kubelet[2453]: I0625 16:22:46.144936 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23760d65-ec8d-43df-893d-1b96955ebc83-lib-modules\") pod \"kube-proxy-wdgzc\" (UID: \"23760d65-ec8d-43df-893d-1b96955ebc83\") " pod="kube-system/kube-proxy-wdgzc" Jun 25 16:22:46.289754 kubelet[2453]: I0625 16:22:46.289721 2453 topology_manager.go:215] "Topology Admit Handler" podUID="54127cee-8bbf-4191-96ca-6b52f3099c68" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-46h68" Jun 25 16:22:46.291972 kubelet[2453]: W0625 16:22:46.291957 2453 reflector.go:547] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Jun 25 16:22:46.292080 kubelet[2453]: E0625 16:22:46.292073 2453 reflector.go:150] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Jun 25 16:22:46.292129 kubelet[2453]: W0625 16:22:46.292001 2453 reflector.go:547] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Jun 25 16:22:46.292166 kubelet[2453]: E0625 16:22:46.292161 2453 reflector.go:150] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Jun 25 16:22:46.293439 systemd[1]: Created slice kubepods-besteffort-pod54127cee_8bbf_4191_96ca_6b52f3099c68.slice - libcontainer container 
kubepods-besteffort-pod54127cee_8bbf_4191_96ca_6b52f3099c68.slice. Jun 25 16:22:46.346526 kubelet[2453]: I0625 16:22:46.346422 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/54127cee-8bbf-4191-96ca-6b52f3099c68-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-46h68\" (UID: \"54127cee-8bbf-4191-96ca-6b52f3099c68\") " pod="tigera-operator/tigera-operator-76ff79f7fd-46h68" Jun 25 16:22:46.346526 kubelet[2453]: I0625 16:22:46.346463 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz9v9\" (UniqueName: \"kubernetes.io/projected/54127cee-8bbf-4191-96ca-6b52f3099c68-kube-api-access-kz9v9\") pod \"tigera-operator-76ff79f7fd-46h68\" (UID: \"54127cee-8bbf-4191-96ca-6b52f3099c68\") " pod="tigera-operator/tigera-operator-76ff79f7fd-46h68" Jun 25 16:22:46.392121 containerd[1339]: time="2024-06-25T16:22:46.392093637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wdgzc,Uid:23760d65-ec8d-43df-893d-1b96955ebc83,Namespace:kube-system,Attempt:0,}" Jun 25 16:22:46.408233 containerd[1339]: time="2024-06-25T16:22:46.408111554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:22:46.408233 containerd[1339]: time="2024-06-25T16:22:46.408143441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:22:46.408233 containerd[1339]: time="2024-06-25T16:22:46.408152441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:22:46.408233 containerd[1339]: time="2024-06-25T16:22:46.408158233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:22:46.424594 systemd[1]: Started cri-containerd-f3cc4e3bc3ec4e0f1bb1713b301df1ff97e7cf64077668c52fa29dec252b5f21.scope - libcontainer container f3cc4e3bc3ec4e0f1bb1713b301df1ff97e7cf64077668c52fa29dec252b5f21. 
Jun 25 16:22:46.432885 kernel: audit: type=1334 audit(1719332566.429:403): prog-id=102 op=LOAD Jun 25 16:22:46.432947 kernel: audit: type=1334 audit(1719332566.430:404): prog-id=103 op=LOAD Jun 25 16:22:46.432964 kernel: audit: type=1300 audit(1719332566.430:404): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=2538 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:46.429000 audit: BPF prog-id=102 op=LOAD Jun 25 16:22:46.430000 audit: BPF prog-id=103 op=LOAD Jun 25 16:22:46.430000 audit[2548]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=2538 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:46.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633636334653362633365633465306631626231373133623330316466 Jun 25 16:22:46.435841 kernel: audit: type=1327 audit(1719332566.430:404): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633636334653362633365633465306631626231373133623330316466 Jun 25 16:22:46.435885 kernel: audit: type=1334 audit(1719332566.430:405): prog-id=104 op=LOAD Jun 25 16:22:46.430000 audit: BPF prog-id=104 op=LOAD Jun 25 16:22:46.430000 audit[2548]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=2538 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:46.436668 kernel: audit: type=1300 audit(1719332566.430:405): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=2538 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:46.438694 kernel: audit: type=1327 audit(1719332566.430:405): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633636334653362633365633465306631626231373133623330316466 Jun 25 16:22:46.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633636334653362633365633465306631626231373133623330316466 Jun 25 16:22:46.430000 audit: BPF prog-id=104 op=UNLOAD Jun 25 16:22:46.430000 audit: BPF prog-id=103 op=UNLOAD Jun 25 16:22:46.441777 kernel: audit: type=1334 audit(1719332566.430:406): prog-id=104 op=UNLOAD Jun 25 16:22:46.441815 kernel: audit: type=1334 audit(1719332566.430:407): prog-id=103 op=UNLOAD Jun 25 16:22:46.441840 kernel: audit: type=1334 audit(1719332566.430:408): prog-id=105 op=LOAD Jun 25 16:22:46.430000 audit: BPF prog-id=105 op=LOAD 
Jun 25 16:22:46.430000 audit[2548]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=2538 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:46.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633636334653362633365633465306631626231373133623330316466 Jun 25 16:22:46.450183 containerd[1339]: time="2024-06-25T16:22:46.449857513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wdgzc,Uid:23760d65-ec8d-43df-893d-1b96955ebc83,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3cc4e3bc3ec4e0f1bb1713b301df1ff97e7cf64077668c52fa29dec252b5f21\"" Jun 25 16:22:46.452503 containerd[1339]: time="2024-06-25T16:22:46.452466634Z" level=info msg="CreateContainer within sandbox \"f3cc4e3bc3ec4e0f1bb1713b301df1ff97e7cf64077668c52fa29dec252b5f21\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 16:22:46.457899 containerd[1339]: time="2024-06-25T16:22:46.457872674Z" level=info msg="CreateContainer within sandbox \"f3cc4e3bc3ec4e0f1bb1713b301df1ff97e7cf64077668c52fa29dec252b5f21\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"26f344c31877e98a0e816cef5ba09f221734d274f8b9f5b26227404e5841f81f\"" Jun 25 16:22:46.458914 containerd[1339]: time="2024-06-25T16:22:46.458300590Z" level=info msg="StartContainer for \"26f344c31877e98a0e816cef5ba09f221734d274f8b9f5b26227404e5841f81f\"" Jun 25 16:22:46.474578 systemd[1]: Started cri-containerd-26f344c31877e98a0e816cef5ba09f221734d274f8b9f5b26227404e5841f81f.scope - libcontainer container 26f344c31877e98a0e816cef5ba09f221734d274f8b9f5b26227404e5841f81f. 
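The PROCTITLE fields in the audit records throughout this log (including the iptables/ip6tables entries that follow) are hex-encoded command lines with NUL bytes separating the arguments. A minimal Python sketch for decoding one offline; the hex value is copied verbatim from the first KUBE-PROXY-CANARY iptables record below:

# Decode an audit PROCTITLE value: the process command line is hex-encoded,
# with NUL bytes between arguments (value copied from an iptables record below).
hex_title = "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
print(bytes.fromhex(hex_title).replace(b"\x00", b" ").decode())
# Expected: iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle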
Jun 25 16:22:46.481000 audit: BPF prog-id=106 op=LOAD Jun 25 16:22:46.481000 audit[2578]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2538 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:46.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236663334346333313837376539386130653831366365663562613039 Jun 25 16:22:46.482000 audit: BPF prog-id=107 op=LOAD Jun 25 16:22:46.482000 audit[2578]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2538 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:46.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236663334346333313837376539386130653831366365663562613039 Jun 25 16:22:46.482000 audit: BPF prog-id=107 op=UNLOAD Jun 25 16:22:46.482000 audit: BPF prog-id=106 op=UNLOAD Jun 25 16:22:46.482000 audit: BPF prog-id=108 op=LOAD Jun 25 16:22:46.482000 audit[2578]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2538 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:46.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236663334346333313837376539386130653831366365663562613039 Jun 25 16:22:46.490231 containerd[1339]: time="2024-06-25T16:22:46.490205667Z" level=info msg="StartContainer for \"26f344c31877e98a0e816cef5ba09f221734d274f8b9f5b26227404e5841f81f\" returns successfully" Jun 25 16:22:47.000000 audit[2630]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2630 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.000000 audit[2630]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc2c17fe0 a2=0 a3=7ffcc2c17fcc items=0 ppid=2589 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.000000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:22:47.001000 audit[2631]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2631 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.001000 audit[2631]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda443bf60 a2=0 a3=7ffda443bf4c items=0 ppid=2589 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jun 25 16:22:47.001000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:22:47.002000 audit[2632]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2632 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.002000 audit[2632]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd5e8feca0 a2=0 a3=7ffd5e8fec8c items=0 ppid=2589 pid=2632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.002000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:22:47.003000 audit[2633]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2633 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.003000 audit[2633]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc12e4bbe0 a2=0 a3=7ffc12e4bbcc items=0 ppid=2589 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.003000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:22:47.004000 audit[2634]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2634 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.004000 audit[2634]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe26b1bb60 a2=0 a3=7ffe26b1bb4c items=0 ppid=2589 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.004000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:22:47.005000 audit[2635]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2635 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.005000 audit[2635]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd254e54a0 a2=0 a3=7ffd254e548c items=0 ppid=2589 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.005000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:22:47.107000 audit[2636]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2636 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.107000 audit[2636]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe4574e290 a2=0 a3=7ffe4574e27c items=0 ppid=2589 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.107000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:22:47.118000 audit[2638]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2638 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.118000 audit[2638]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffce62d5f60 a2=0 a3=7ffce62d5f4c items=0 ppid=2589 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.118000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 16:22:47.125000 audit[2641]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2641 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.125000 audit[2641]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff96d909c0 a2=0 a3=7fff96d909ac items=0 ppid=2589 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.125000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 16:22:47.126000 audit[2642]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2642 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.126000 audit[2642]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc184bda30 a2=0 a3=7ffc184bda1c items=0 ppid=2589 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.126000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:22:47.129000 audit[2644]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2644 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.129000 audit[2644]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe7bf6bec0 a2=0 a3=7ffe7bf6beac items=0 ppid=2589 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.129000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:22:47.130000 audit[2645]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2645 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.130000 audit[2645]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffcf7cb770 a2=0 
a3=7fffcf7cb75c items=0 ppid=2589 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.130000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:22:47.133000 audit[2647]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2647 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.133000 audit[2647]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff60a860e0 a2=0 a3=7fff60a860cc items=0 ppid=2589 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.133000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:22:47.136000 audit[2650]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2650 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.136000 audit[2650]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd0594f8c0 a2=0 a3=7ffd0594f8ac items=0 ppid=2589 pid=2650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.136000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 16:22:47.137000 audit[2651]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2651 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.137000 audit[2651]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc928ffd90 a2=0 a3=7ffc928ffd7c items=0 ppid=2589 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.137000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:22:47.139000 audit[2653]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2653 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.139000 audit[2653]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd7258bd50 a2=0 a3=7ffd7258bd3c items=0 ppid=2589 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.139000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:22:47.141000 audit[2654]: NETFILTER_CFG 
table=filter:54 family=2 entries=1 op=nft_register_chain pid=2654 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.141000 audit[2654]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff2db6c310 a2=0 a3=7fff2db6c2fc items=0 ppid=2589 pid=2654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.141000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:22:47.143000 audit[2656]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2656 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.143000 audit[2656]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff84836970 a2=0 a3=7fff8483695c items=0 ppid=2589 pid=2656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.143000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:22:47.146000 audit[2659]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2659 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.146000 audit[2659]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdad4094f0 a2=0 a3=7ffdad4094dc items=0 ppid=2589 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.146000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:22:47.149000 audit[2662]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2662 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.149000 audit[2662]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc9e98a1b0 a2=0 a3=7ffc9e98a19c items=0 ppid=2589 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.149000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:22:47.150000 audit[2663]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2663 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.150000 audit[2663]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcf9035c50 a2=0 a3=7ffcf9035c3c items=0 ppid=2589 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.150000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:22:47.153000 audit[2665]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2665 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.153000 audit[2665]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe5624f9f0 a2=0 a3=7ffe5624f9dc items=0 ppid=2589 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.153000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:22:47.156000 audit[2668]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2668 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.156000 audit[2668]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd2b9d0e60 a2=0 a3=7ffd2b9d0e4c items=0 ppid=2589 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.156000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:22:47.157000 audit[2669]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2669 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.157000 audit[2669]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2c64b540 a2=0 a3=7ffc2c64b52c items=0 ppid=2589 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.157000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:22:47.159000 audit[2671]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2671 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:47.159000 audit[2671]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffcdde811d0 a2=0 a3=7ffcdde811bc items=0 ppid=2589 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.159000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:22:47.176000 audit[2677]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2677 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:47.176000 audit[2677]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffde4b30d90 a2=0 
a3=7ffde4b30d7c items=0 ppid=2589 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.176000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:47.181000 audit[2677]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2677 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:47.181000 audit[2677]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffde4b30d90 a2=0 a3=7ffde4b30d7c items=0 ppid=2589 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.181000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:47.183000 audit[2684]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2684 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.183000 audit[2684]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffebec92c60 a2=0 a3=7ffebec92c4c items=0 ppid=2589 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.183000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:22:47.184000 audit[2686]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2686 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.184000 audit[2686]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffecd9801d0 a2=0 a3=7ffecd9801bc items=0 ppid=2589 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.184000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 16:22:47.187000 audit[2689]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2689 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.187000 audit[2689]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc98378ba0 a2=0 a3=7ffc98378b8c items=0 ppid=2589 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.187000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 16:22:47.188000 audit[2690]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain 
pid=2690 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.188000 audit[2690]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc6c48550 a2=0 a3=7ffdc6c4853c items=0 ppid=2589 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.188000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:22:47.189000 audit[2692]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2692 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.189000 audit[2692]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffd3cd3d30 a2=0 a3=7fffd3cd3d1c items=0 ppid=2589 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.189000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:22:47.190000 audit[2693]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2693 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.190000 audit[2693]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1dcb8180 a2=0 a3=7ffe1dcb816c items=0 ppid=2589 pid=2693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.190000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:22:47.192000 audit[2695]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2695 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.192000 audit[2695]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff67d0af70 a2=0 a3=7fff67d0af5c items=0 ppid=2589 pid=2695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.192000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 16:22:47.194000 audit[2698]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2698 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.194000 audit[2698]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd8c136a10 a2=0 a3=7ffd8c1369fc items=0 ppid=2589 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.194000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:22:47.195000 audit[2699]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2699 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.195000 audit[2699]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffecfe154d0 a2=0 a3=7ffecfe154bc items=0 ppid=2589 pid=2699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.195000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:22:47.196000 audit[2701]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2701 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.196000 audit[2701]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff19c23060 a2=0 a3=7fff19c2304c items=0 ppid=2589 pid=2701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.196000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:22:47.197000 audit[2702]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2702 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.197000 audit[2702]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdfb6c0e30 a2=0 a3=7ffdfb6c0e1c items=0 ppid=2589 pid=2702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.197000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:22:47.199000 audit[2704]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2704 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.199000 audit[2704]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc6bb4e420 a2=0 a3=7ffc6bb4e40c items=0 ppid=2589 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.199000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:22:47.201000 audit[2707]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2707 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.201000 audit[2707]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe55d24450 a2=0 a3=7ffe55d2443c 
items=0 ppid=2589 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.201000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:22:47.204000 audit[2710]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2710 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.204000 audit[2710]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe68beec00 a2=0 a3=7ffe68beebec items=0 ppid=2589 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.204000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 16:22:47.205000 audit[2711]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2711 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.205000 audit[2711]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcfdc23a30 a2=0 a3=7ffcfdc23a1c items=0 ppid=2589 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.205000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:22:47.206000 audit[2713]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2713 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.206000 audit[2713]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc120d2ba0 a2=0 a3=7ffc120d2b8c items=0 ppid=2589 pid=2713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.206000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:22:47.209000 audit[2716]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2716 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.209000 audit[2716]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffdffc26ce0 a2=0 a3=7ffdffc26ccc items=0 ppid=2589 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.209000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:22:47.209000 audit[2717]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2717 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.209000 audit[2717]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdaab85220 a2=0 a3=7ffdaab8520c items=0 ppid=2589 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.209000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:22:47.211000 audit[2719]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2719 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.211000 audit[2719]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff4c58d250 a2=0 a3=7fff4c58d23c items=0 ppid=2589 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.211000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:22:47.212000 audit[2720]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2720 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.212000 audit[2720]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff82da910 a2=0 a3=7ffff82da8fc items=0 ppid=2589 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.212000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:22:47.213000 audit[2722]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2722 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.213000 audit[2722]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffe8aa95b0 a2=0 a3=7fffe8aa959c items=0 ppid=2589 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.213000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:22:47.215000 audit[2725]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2725 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:47.215000 audit[2725]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe363e1880 a2=0 a3=7ffe363e186c items=0 ppid=2589 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.215000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:22:47.217000 audit[2727]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2727 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:22:47.217000 audit[2727]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffe92ece460 a2=0 a3=7ffe92ece44c items=0 ppid=2589 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.217000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:47.218000 audit[2727]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2727 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:22:47.218000 audit[2727]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe92ece460 a2=0 a3=7ffe92ece44c items=0 ppid=2589 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:47.218000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:47.253669 systemd[1]: run-containerd-runc-k8s.io-f3cc4e3bc3ec4e0f1bb1713b301df1ff97e7cf64077668c52fa29dec252b5f21-runc.qKK1vN.mount: Deactivated successfully. Jun 25 16:22:47.453257 kubelet[2453]: E0625 16:22:47.453228 2453 projected.go:294] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jun 25 16:22:47.453257 kubelet[2453]: E0625 16:22:47.453260 2453 projected.go:200] Error preparing data for projected volume kube-api-access-kz9v9 for pod tigera-operator/tigera-operator-76ff79f7fd-46h68: failed to sync configmap cache: timed out waiting for the condition Jun 25 16:22:47.464771 kubelet[2453]: E0625 16:22:47.464738 2453 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54127cee-8bbf-4191-96ca-6b52f3099c68-kube-api-access-kz9v9 podName:54127cee-8bbf-4191-96ca-6b52f3099c68 nodeName:}" failed. No retries permitted until 2024-06-25 16:22:47.959827471 +0000 UTC m=+16.600868691 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kz9v9" (UniqueName: "kubernetes.io/projected/54127cee-8bbf-4191-96ca-6b52f3099c68-kube-api-access-kz9v9") pod "tigera-operator-76ff79f7fd-46h68" (UID: "54127cee-8bbf-4191-96ca-6b52f3099c68") : failed to sync configmap cache: timed out waiting for the condition Jun 25 16:22:48.095434 containerd[1339]: time="2024-06-25T16:22:48.095405435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-46h68,Uid:54127cee-8bbf-4191-96ca-6b52f3099c68,Namespace:tigera-operator,Attempt:0,}" Jun 25 16:22:48.138111 containerd[1339]: time="2024-06-25T16:22:48.138036090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:22:48.138251 containerd[1339]: time="2024-06-25T16:22:48.138236474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:22:48.138324 containerd[1339]: time="2024-06-25T16:22:48.138309290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:22:48.138385 containerd[1339]: time="2024-06-25T16:22:48.138374230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:22:48.155633 systemd[1]: Started cri-containerd-0b1f5f676cdde387b85657a794071bbd63986bd2da35fe1920e490cb527d0dd7.scope - libcontainer container 0b1f5f676cdde387b85657a794071bbd63986bd2da35fe1920e490cb527d0dd7. Jun 25 16:22:48.163000 audit: BPF prog-id=109 op=LOAD Jun 25 16:22:48.164000 audit: BPF prog-id=110 op=LOAD Jun 25 16:22:48.164000 audit[2747]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=2737 pid=2747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:48.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316635663637366364646533383762383536353761373934303731 Jun 25 16:22:48.164000 audit: BPF prog-id=111 op=LOAD Jun 25 16:22:48.164000 audit[2747]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=2737 pid=2747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:48.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316635663637366364646533383762383536353761373934303731 Jun 25 16:22:48.164000 audit: BPF prog-id=111 op=UNLOAD Jun 25 16:22:48.164000 audit: BPF prog-id=110 op=UNLOAD Jun 25 16:22:48.164000 audit: BPF prog-id=112 op=LOAD Jun 25 16:22:48.164000 audit[2747]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=2737 pid=2747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:48.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316635663637366364646533383762383536353761373934303731 Jun 25 16:22:48.189200 containerd[1339]: time="2024-06-25T16:22:48.189164125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-46h68,Uid:54127cee-8bbf-4191-96ca-6b52f3099c68,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0b1f5f676cdde387b85657a794071bbd63986bd2da35fe1920e490cb527d0dd7\"" Jun 25 16:22:48.190575 containerd[1339]: time="2024-06-25T16:22:48.190451053Z" level=info 
msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 16:22:49.522102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2856438026.mount: Deactivated successfully. Jun 25 16:22:49.887055 containerd[1339]: time="2024-06-25T16:22:49.886981577Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:49.889295 containerd[1339]: time="2024-06-25T16:22:49.889267915Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076064" Jun 25 16:22:49.891559 containerd[1339]: time="2024-06-25T16:22:49.891542900Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:49.895688 containerd[1339]: time="2024-06-25T16:22:49.895674918Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:49.897887 containerd[1339]: time="2024-06-25T16:22:49.897875163Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:49.898185 containerd[1339]: time="2024-06-25T16:22:49.898166781Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 1.707691548s" Jun 25 16:22:49.898218 containerd[1339]: time="2024-06-25T16:22:49.898186904Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 16:22:49.899893 containerd[1339]: time="2024-06-25T16:22:49.899873734Z" level=info msg="CreateContainer within sandbox \"0b1f5f676cdde387b85657a794071bbd63986bd2da35fe1920e490cb527d0dd7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 16:22:49.919178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2446022967.mount: Deactivated successfully. Jun 25 16:22:49.922876 containerd[1339]: time="2024-06-25T16:22:49.922856315Z" level=info msg="CreateContainer within sandbox \"0b1f5f676cdde387b85657a794071bbd63986bd2da35fe1920e490cb527d0dd7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"41e72ed2581a02a55d624f636367bc9a34105672721c80bfbcd70d6fda12048a\"" Jun 25 16:22:49.923414 containerd[1339]: time="2024-06-25T16:22:49.923395080Z" level=info msg="StartContainer for \"41e72ed2581a02a55d624f636367bc9a34105672721c80bfbcd70d6fda12048a\"" Jun 25 16:22:49.941596 systemd[1]: Started cri-containerd-41e72ed2581a02a55d624f636367bc9a34105672721c80bfbcd70d6fda12048a.scope - libcontainer container 41e72ed2581a02a55d624f636367bc9a34105672721c80bfbcd70d6fda12048a. 
Jun 25 16:22:49.948000 audit: BPF prog-id=113 op=LOAD Jun 25 16:22:49.948000 audit: BPF prog-id=114 op=LOAD Jun 25 16:22:49.948000 audit[2786]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=2737 pid=2786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:49.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653732656432353831613032613535643632346636333633363762 Jun 25 16:22:49.948000 audit: BPF prog-id=115 op=LOAD Jun 25 16:22:49.948000 audit[2786]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=2737 pid=2786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:49.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653732656432353831613032613535643632346636333633363762 Jun 25 16:22:49.948000 audit: BPF prog-id=115 op=UNLOAD Jun 25 16:22:49.948000 audit: BPF prog-id=114 op=UNLOAD Jun 25 16:22:49.949000 audit: BPF prog-id=116 op=LOAD Jun 25 16:22:49.949000 audit[2786]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=2737 pid=2786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:49.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653732656432353831613032613535643632346636333633363762 Jun 25 16:22:49.958818 containerd[1339]: time="2024-06-25T16:22:49.957457903Z" level=info msg="StartContainer for \"41e72ed2581a02a55d624f636367bc9a34105672721c80bfbcd70d6fda12048a\" returns successfully" Jun 25 16:22:50.492520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1286404853.mount: Deactivated successfully. 
Jun 25 16:22:50.525281 kubelet[2453]: I0625 16:22:50.525249 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wdgzc" podStartSLOduration=4.525237757 podStartE2EDuration="4.525237757s" podCreationTimestamp="2024-06-25 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:22:46.512394503 +0000 UTC m=+15.153435730" watchObservedRunningTime="2024-06-25 16:22:50.525237757 +0000 UTC m=+19.166278977" Jun 25 16:22:52.579000 audit[2817]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2817 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:52.581322 kernel: kauditd_printk_skb: 190 callbacks suppressed Jun 25 16:22:52.581365 kernel: audit: type=1325 audit(1719332572.579:477): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2817 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:52.581384 kernel: audit: type=1300 audit(1719332572.579:477): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffdd0ef28d0 a2=0 a3=7ffdd0ef28bc items=0 ppid=2589 pid=2817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:52.579000 audit[2817]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffdd0ef28d0 a2=0 a3=7ffdd0ef28bc items=0 ppid=2589 pid=2817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:52.579000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:52.584777 kernel: audit: type=1327 audit(1719332572.579:477): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:52.584000 audit[2817]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2817 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:52.586253 kernel: audit: type=1325 audit(1719332572.584:478): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2817 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:52.586279 kernel: audit: type=1300 audit(1719332572.584:478): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd0ef28d0 a2=0 a3=0 items=0 ppid=2589 pid=2817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:52.584000 audit[2817]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd0ef28d0 a2=0 a3=0 items=0 ppid=2589 pid=2817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:52.584000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:52.589912 kernel: audit: type=1327 audit(1719332572.584:478): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 
16:22:52.592000 audit[2819]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2819 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:52.592000 audit[2819]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc13a97cc0 a2=0 a3=7ffc13a97cac items=0 ppid=2589 pid=2819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:52.596964 kernel: audit: type=1325 audit(1719332572.592:479): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2819 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:52.597041 kernel: audit: type=1300 audit(1719332572.592:479): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc13a97cc0 a2=0 a3=7ffc13a97cac items=0 ppid=2589 pid=2819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:52.597061 kernel: audit: type=1327 audit(1719332572.592:479): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:52.592000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:52.596000 audit[2819]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2819 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:52.599516 kernel: audit: type=1325 audit(1719332572.596:480): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2819 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:52.596000 audit[2819]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc13a97cc0 a2=0 a3=0 items=0 ppid=2589 pid=2819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:52.596000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:52.691323 kubelet[2453]: I0625 16:22:52.691279 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-46h68" podStartSLOduration=4.982547587 podStartE2EDuration="6.691268842s" podCreationTimestamp="2024-06-25 16:22:46 +0000 UTC" firstStartedPulling="2024-06-25 16:22:48.189952341 +0000 UTC m=+16.830993560" lastFinishedPulling="2024-06-25 16:22:49.898673596 +0000 UTC m=+18.539714815" observedRunningTime="2024-06-25 16:22:50.525627208 +0000 UTC m=+19.166668430" watchObservedRunningTime="2024-06-25 16:22:52.691268842 +0000 UTC m=+21.332310065" Jun 25 16:22:52.691878 kubelet[2453]: I0625 16:22:52.691862 2453 topology_manager.go:215] "Topology Admit Handler" podUID="1368514c-0b94-4150-be4d-d08687c4a876" podNamespace="calico-system" podName="calico-typha-6dd598654-89qd9" Jun 25 16:22:52.695284 systemd[1]: Created slice kubepods-besteffort-pod1368514c_0b94_4150_be4d_d08687c4a876.slice - libcontainer container kubepods-besteffort-pod1368514c_0b94_4150_be4d_d08687c4a876.slice. 
Jun 25 16:22:52.747182 kubelet[2453]: I0625 16:22:52.747159 2453 topology_manager.go:215] "Topology Admit Handler" podUID="e523d184-387b-453d-ad2d-605c20a32836" podNamespace="calico-system" podName="calico-node-z8g5h" Jun 25 16:22:52.751521 systemd[1]: Created slice kubepods-besteffort-pode523d184_387b_453d_ad2d_605c20a32836.slice - libcontainer container kubepods-besteffort-pode523d184_387b_453d_ad2d_605c20a32836.slice. Jun 25 16:22:52.792969 kubelet[2453]: I0625 16:22:52.792939 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e523d184-387b-453d-ad2d-605c20a32836-var-run-calico\") pod \"calico-node-z8g5h\" (UID: \"e523d184-387b-453d-ad2d-605c20a32836\") " pod="calico-system/calico-node-z8g5h" Jun 25 16:22:52.793140 kubelet[2453]: I0625 16:22:52.793128 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e523d184-387b-453d-ad2d-605c20a32836-cni-bin-dir\") pod \"calico-node-z8g5h\" (UID: \"e523d184-387b-453d-ad2d-605c20a32836\") " pod="calico-system/calico-node-z8g5h" Jun 25 16:22:52.793203 kubelet[2453]: I0625 16:22:52.793195 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e523d184-387b-453d-ad2d-605c20a32836-lib-modules\") pod \"calico-node-z8g5h\" (UID: \"e523d184-387b-453d-ad2d-605c20a32836\") " pod="calico-system/calico-node-z8g5h" Jun 25 16:22:52.793251 kubelet[2453]: I0625 16:22:52.793244 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e523d184-387b-453d-ad2d-605c20a32836-policysync\") pod \"calico-node-z8g5h\" (UID: \"e523d184-387b-453d-ad2d-605c20a32836\") " pod="calico-system/calico-node-z8g5h" Jun 25 16:22:52.793303 kubelet[2453]: I0625 16:22:52.793295 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e523d184-387b-453d-ad2d-605c20a32836-tigera-ca-bundle\") pod \"calico-node-z8g5h\" (UID: \"e523d184-387b-453d-ad2d-605c20a32836\") " pod="calico-system/calico-node-z8g5h" Jun 25 16:22:52.793349 kubelet[2453]: I0625 16:22:52.793341 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e523d184-387b-453d-ad2d-605c20a32836-flexvol-driver-host\") pod \"calico-node-z8g5h\" (UID: \"e523d184-387b-453d-ad2d-605c20a32836\") " pod="calico-system/calico-node-z8g5h" Jun 25 16:22:52.793410 kubelet[2453]: I0625 16:22:52.793402 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1368514c-0b94-4150-be4d-d08687c4a876-tigera-ca-bundle\") pod \"calico-typha-6dd598654-89qd9\" (UID: \"1368514c-0b94-4150-be4d-d08687c4a876\") " pod="calico-system/calico-typha-6dd598654-89qd9" Jun 25 16:22:52.793454 kubelet[2453]: I0625 16:22:52.793448 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e523d184-387b-453d-ad2d-605c20a32836-var-lib-calico\") pod \"calico-node-z8g5h\" (UID: \"e523d184-387b-453d-ad2d-605c20a32836\") " pod="calico-system/calico-node-z8g5h" Jun 25 
16:22:52.793527 kubelet[2453]: I0625 16:22:52.793520 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e523d184-387b-453d-ad2d-605c20a32836-cni-log-dir\") pod \"calico-node-z8g5h\" (UID: \"e523d184-387b-453d-ad2d-605c20a32836\") " pod="calico-system/calico-node-z8g5h" Jun 25 16:22:52.793594 kubelet[2453]: I0625 16:22:52.793570 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1368514c-0b94-4150-be4d-d08687c4a876-typha-certs\") pod \"calico-typha-6dd598654-89qd9\" (UID: \"1368514c-0b94-4150-be4d-d08687c4a876\") " pod="calico-system/calico-typha-6dd598654-89qd9" Jun 25 16:22:52.793681 kubelet[2453]: I0625 16:22:52.793672 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk4jm\" (UniqueName: \"kubernetes.io/projected/1368514c-0b94-4150-be4d-d08687c4a876-kube-api-access-kk4jm\") pod \"calico-typha-6dd598654-89qd9\" (UID: \"1368514c-0b94-4150-be4d-d08687c4a876\") " pod="calico-system/calico-typha-6dd598654-89qd9" Jun 25 16:22:52.793754 kubelet[2453]: I0625 16:22:52.793746 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e523d184-387b-453d-ad2d-605c20a32836-node-certs\") pod \"calico-node-z8g5h\" (UID: \"e523d184-387b-453d-ad2d-605c20a32836\") " pod="calico-system/calico-node-z8g5h" Jun 25 16:22:52.793828 kubelet[2453]: I0625 16:22:52.793821 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e523d184-387b-453d-ad2d-605c20a32836-cni-net-dir\") pod \"calico-node-z8g5h\" (UID: \"e523d184-387b-453d-ad2d-605c20a32836\") " pod="calico-system/calico-node-z8g5h" Jun 25 16:22:52.793915 kubelet[2453]: I0625 16:22:52.793907 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e523d184-387b-453d-ad2d-605c20a32836-xtables-lock\") pod \"calico-node-z8g5h\" (UID: \"e523d184-387b-453d-ad2d-605c20a32836\") " pod="calico-system/calico-node-z8g5h" Jun 25 16:22:52.793988 kubelet[2453]: I0625 16:22:52.793981 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7h7t\" (UniqueName: \"kubernetes.io/projected/e523d184-387b-453d-ad2d-605c20a32836-kube-api-access-j7h7t\") pod \"calico-node-z8g5h\" (UID: \"e523d184-387b-453d-ad2d-605c20a32836\") " pod="calico-system/calico-node-z8g5h" Jun 25 16:22:52.895508 kubelet[2453]: I0625 16:22:52.895432 2453 topology_manager.go:215] "Topology Admit Handler" podUID="0ac2a4fe-1895-4a84-986f-bb41e2524a94" podNamespace="calico-system" podName="csi-node-driver-8kv5x" Jun 25 16:22:52.898682 kubelet[2453]: E0625 16:22:52.898659 2453 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8kv5x" podUID="0ac2a4fe-1895-4a84-986f-bb41e2524a94" Jun 25 16:22:52.924496 kubelet[2453]: E0625 16:22:52.923539 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.924496 
kubelet[2453]: W0625 16:22:52.923551 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.924496 kubelet[2453]: E0625 16:22:52.923572 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.924496 kubelet[2453]: E0625 16:22:52.923688 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.924496 kubelet[2453]: W0625 16:22:52.923692 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.924496 kubelet[2453]: E0625 16:22:52.923698 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.958364 kubelet[2453]: E0625 16:22:52.958351 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.958454 kubelet[2453]: W0625 16:22:52.958445 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.958514 kubelet[2453]: E0625 16:22:52.958507 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.978663 kubelet[2453]: E0625 16:22:52.978649 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.978760 kubelet[2453]: W0625 16:22:52.978752 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.978911 kubelet[2453]: E0625 16:22:52.978903 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.982854 kubelet[2453]: E0625 16:22:52.982842 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.982906 kubelet[2453]: W0625 16:22:52.982900 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.982955 kubelet[2453]: E0625 16:22:52.982948 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:22:52.983089 kubelet[2453]: E0625 16:22:52.983083 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.983133 kubelet[2453]: W0625 16:22:52.983126 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.983176 kubelet[2453]: E0625 16:22:52.983169 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.983334 kubelet[2453]: E0625 16:22:52.983327 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.983403 kubelet[2453]: W0625 16:22:52.983393 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.990656 kubelet[2453]: E0625 16:22:52.983504 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.990656 kubelet[2453]: E0625 16:22:52.984055 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.990656 kubelet[2453]: W0625 16:22:52.984061 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.990656 kubelet[2453]: E0625 16:22:52.984068 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.990656 kubelet[2453]: E0625 16:22:52.984223 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.990656 kubelet[2453]: W0625 16:22:52.984228 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.990656 kubelet[2453]: E0625 16:22:52.984234 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.990656 kubelet[2453]: E0625 16:22:52.984550 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.990656 kubelet[2453]: W0625 16:22:52.984557 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.990656 kubelet[2453]: E0625 16:22:52.984563 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:22:52.990852 kubelet[2453]: E0625 16:22:52.984705 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.990852 kubelet[2453]: W0625 16:22:52.984710 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.990852 kubelet[2453]: E0625 16:22:52.984717 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.990852 kubelet[2453]: E0625 16:22:52.985532 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.990852 kubelet[2453]: W0625 16:22:52.985538 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.990852 kubelet[2453]: E0625 16:22:52.985546 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.990852 kubelet[2453]: E0625 16:22:52.985657 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.990852 kubelet[2453]: W0625 16:22:52.985662 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.990852 kubelet[2453]: E0625 16:22:52.985667 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.990852 kubelet[2453]: E0625 16:22:52.985770 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.991021 kubelet[2453]: W0625 16:22:52.985775 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.991021 kubelet[2453]: E0625 16:22:52.985780 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.991021 kubelet[2453]: E0625 16:22:52.985880 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.991021 kubelet[2453]: W0625 16:22:52.985884 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.991021 kubelet[2453]: E0625 16:22:52.985889 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:22:52.991021 kubelet[2453]: E0625 16:22:52.985992 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.991021 kubelet[2453]: W0625 16:22:52.985996 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.991021 kubelet[2453]: E0625 16:22:52.986001 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.991021 kubelet[2453]: E0625 16:22:52.986096 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.991021 kubelet[2453]: W0625 16:22:52.986101 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.991183 kubelet[2453]: E0625 16:22:52.986105 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.991183 kubelet[2453]: E0625 16:22:52.986199 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.991183 kubelet[2453]: W0625 16:22:52.986203 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.991183 kubelet[2453]: E0625 16:22:52.986209 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.991183 kubelet[2453]: E0625 16:22:52.986301 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.991183 kubelet[2453]: W0625 16:22:52.986306 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.991183 kubelet[2453]: E0625 16:22:52.986310 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.991183 kubelet[2453]: E0625 16:22:52.986400 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.991183 kubelet[2453]: W0625 16:22:52.986405 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.991183 kubelet[2453]: E0625 16:22:52.986410 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:22:52.991347 kubelet[2453]: E0625 16:22:52.986518 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.991347 kubelet[2453]: W0625 16:22:52.986522 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.991347 kubelet[2453]: E0625 16:22:52.986527 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.991347 kubelet[2453]: E0625 16:22:52.986606 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.991347 kubelet[2453]: W0625 16:22:52.986611 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.991347 kubelet[2453]: E0625 16:22:52.986615 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.991347 kubelet[2453]: E0625 16:22:52.986695 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.991347 kubelet[2453]: W0625 16:22:52.986700 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.991347 kubelet[2453]: E0625 16:22:52.986704 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.991347 kubelet[2453]: E0625 16:22:52.986791 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.991528 kubelet[2453]: W0625 16:22:52.986796 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.991528 kubelet[2453]: E0625 16:22:52.986800 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.996033 kubelet[2453]: E0625 16:22:52.996024 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.996077 kubelet[2453]: W0625 16:22:52.996070 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.996128 kubelet[2453]: E0625 16:22:52.996121 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:22:52.996189 kubelet[2453]: I0625 16:22:52.996178 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0ac2a4fe-1895-4a84-986f-bb41e2524a94-varrun\") pod \"csi-node-driver-8kv5x\" (UID: \"0ac2a4fe-1895-4a84-986f-bb41e2524a94\") " pod="calico-system/csi-node-driver-8kv5x" Jun 25 16:22:52.996361 kubelet[2453]: E0625 16:22:52.996348 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.996361 kubelet[2453]: W0625 16:22:52.996358 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.996413 kubelet[2453]: E0625 16:22:52.996375 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.996515 kubelet[2453]: E0625 16:22:52.996506 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.996515 kubelet[2453]: W0625 16:22:52.996512 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.996568 kubelet[2453]: E0625 16:22:52.996521 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.996959 containerd[1339]: time="2024-06-25T16:22:52.996941210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dd598654-89qd9,Uid:1368514c-0b94-4150-be4d-d08687c4a876,Namespace:calico-system,Attempt:0,}" Jun 25 16:22:52.997178 kubelet[2453]: E0625 16:22:52.997172 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.997235 kubelet[2453]: W0625 16:22:52.997226 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.997282 kubelet[2453]: E0625 16:22:52.997275 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:22:52.997349 kubelet[2453]: I0625 16:22:52.997341 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0ac2a4fe-1895-4a84-986f-bb41e2524a94-registration-dir\") pod \"csi-node-driver-8kv5x\" (UID: \"0ac2a4fe-1895-4a84-986f-bb41e2524a94\") " pod="calico-system/csi-node-driver-8kv5x" Jun 25 16:22:52.997499 kubelet[2453]: E0625 16:22:52.997471 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.997499 kubelet[2453]: W0625 16:22:52.997479 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.997499 kubelet[2453]: E0625 16:22:52.997499 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.997595 kubelet[2453]: E0625 16:22:52.997585 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.997595 kubelet[2453]: W0625 16:22:52.997592 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.997655 kubelet[2453]: E0625 16:22:52.997597 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.997754 kubelet[2453]: E0625 16:22:52.997745 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.997754 kubelet[2453]: W0625 16:22:52.997751 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.997754 kubelet[2453]: E0625 16:22:52.997757 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.997837 kubelet[2453]: I0625 16:22:52.997768 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ac2a4fe-1895-4a84-986f-bb41e2524a94-kubelet-dir\") pod \"csi-node-driver-8kv5x\" (UID: \"0ac2a4fe-1895-4a84-986f-bb41e2524a94\") " pod="calico-system/csi-node-driver-8kv5x" Jun 25 16:22:52.997869 kubelet[2453]: E0625 16:22:52.997863 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.997869 kubelet[2453]: W0625 16:22:52.997868 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.997907 kubelet[2453]: E0625 16:22:52.997873 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:22:52.997907 kubelet[2453]: I0625 16:22:52.997881 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0ac2a4fe-1895-4a84-986f-bb41e2524a94-socket-dir\") pod \"csi-node-driver-8kv5x\" (UID: \"0ac2a4fe-1895-4a84-986f-bb41e2524a94\") " pod="calico-system/csi-node-driver-8kv5x" Jun 25 16:22:52.997969 kubelet[2453]: E0625 16:22:52.997961 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.997969 kubelet[2453]: W0625 16:22:52.997967 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.998014 kubelet[2453]: E0625 16:22:52.997972 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.998014 kubelet[2453]: I0625 16:22:52.997982 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkxwl\" (UniqueName: \"kubernetes.io/projected/0ac2a4fe-1895-4a84-986f-bb41e2524a94-kube-api-access-jkxwl\") pod \"csi-node-driver-8kv5x\" (UID: \"0ac2a4fe-1895-4a84-986f-bb41e2524a94\") " pod="calico-system/csi-node-driver-8kv5x" Jun 25 16:22:52.998150 kubelet[2453]: E0625 16:22:52.998139 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.998150 kubelet[2453]: W0625 16:22:52.998146 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.998206 kubelet[2453]: E0625 16:22:52.998155 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.998248 kubelet[2453]: E0625 16:22:52.998238 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.998248 kubelet[2453]: W0625 16:22:52.998244 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.998303 kubelet[2453]: E0625 16:22:52.998252 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.998412 kubelet[2453]: E0625 16:22:52.998402 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.998412 kubelet[2453]: W0625 16:22:52.998410 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.998465 kubelet[2453]: E0625 16:22:52.998417 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:22:52.998526 kubelet[2453]: E0625 16:22:52.998515 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.998526 kubelet[2453]: W0625 16:22:52.998523 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.998598 kubelet[2453]: E0625 16:22:52.998530 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.998623 kubelet[2453]: E0625 16:22:52.998619 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.998643 kubelet[2453]: W0625 16:22:52.998623 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.998643 kubelet[2453]: E0625 16:22:52.998628 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:52.998714 kubelet[2453]: E0625 16:22:52.998706 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:52.998714 kubelet[2453]: W0625 16:22:52.998713 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:52.998772 kubelet[2453]: E0625 16:22:52.998718 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.054278 containerd[1339]: time="2024-06-25T16:22:53.054254096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z8g5h,Uid:e523d184-387b-453d-ad2d-605c20a32836,Namespace:calico-system,Attempt:0,}" Jun 25 16:22:53.058958 containerd[1339]: time="2024-06-25T16:22:53.058901606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:22:53.058958 containerd[1339]: time="2024-06-25T16:22:53.058937824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:22:53.059085 containerd[1339]: time="2024-06-25T16:22:53.058947785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:22:53.059142 containerd[1339]: time="2024-06-25T16:22:53.059079272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:22:53.099126 kubelet[2453]: E0625 16:22:53.099104 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.099126 kubelet[2453]: W0625 16:22:53.099119 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.099242 kubelet[2453]: E0625 16:22:53.099133 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.099265 kubelet[2453]: E0625 16:22:53.099244 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.099265 kubelet[2453]: W0625 16:22:53.099249 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.099265 kubelet[2453]: E0625 16:22:53.099254 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.099365 kubelet[2453]: E0625 16:22:53.099358 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.099365 kubelet[2453]: W0625 16:22:53.099363 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.099402 kubelet[2453]: E0625 16:22:53.099368 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.099476 kubelet[2453]: E0625 16:22:53.099457 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.099476 kubelet[2453]: W0625 16:22:53.099465 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.099476 kubelet[2453]: E0625 16:22:53.099472 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.099580 kubelet[2453]: E0625 16:22:53.099572 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.099606 kubelet[2453]: W0625 16:22:53.099578 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.099606 kubelet[2453]: E0625 16:22:53.099589 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:22:53.099703 kubelet[2453]: E0625 16:22:53.099692 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.099703 kubelet[2453]: W0625 16:22:53.099698 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.099754 kubelet[2453]: E0625 16:22:53.099705 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.099801 kubelet[2453]: E0625 16:22:53.099791 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.099801 kubelet[2453]: W0625 16:22:53.099797 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.099848 kubelet[2453]: E0625 16:22:53.099810 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.099899 kubelet[2453]: E0625 16:22:53.099891 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.099899 kubelet[2453]: W0625 16:22:53.099897 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.099943 kubelet[2453]: E0625 16:22:53.099907 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.100021 kubelet[2453]: E0625 16:22:53.100011 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.100021 kubelet[2453]: W0625 16:22:53.100018 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.100067 kubelet[2453]: E0625 16:22:53.100028 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.100125 kubelet[2453]: E0625 16:22:53.100116 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.100125 kubelet[2453]: W0625 16:22:53.100122 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.100170 kubelet[2453]: E0625 16:22:53.100129 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:22:53.100219 kubelet[2453]: E0625 16:22:53.100210 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.100219 kubelet[2453]: W0625 16:22:53.100216 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.100261 kubelet[2453]: E0625 16:22:53.100223 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.100512 kubelet[2453]: E0625 16:22:53.100300 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.100512 kubelet[2453]: W0625 16:22:53.100306 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.100512 kubelet[2453]: E0625 16:22:53.100316 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.100512 kubelet[2453]: E0625 16:22:53.100442 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.100512 kubelet[2453]: W0625 16:22:53.100448 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.100512 kubelet[2453]: E0625 16:22:53.100453 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.100638 kubelet[2453]: E0625 16:22:53.100544 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.100638 kubelet[2453]: W0625 16:22:53.100548 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.100638 kubelet[2453]: E0625 16:22:53.100553 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.100638 kubelet[2453]: E0625 16:22:53.100627 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.100638 kubelet[2453]: W0625 16:22:53.100631 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.100638 kubelet[2453]: E0625 16:22:53.100636 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:22:53.100739 kubelet[2453]: E0625 16:22:53.100727 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.100739 kubelet[2453]: W0625 16:22:53.100731 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.100739 kubelet[2453]: E0625 16:22:53.100735 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.100844 kubelet[2453]: E0625 16:22:53.100834 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.100844 kubelet[2453]: W0625 16:22:53.100840 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.100903 kubelet[2453]: E0625 16:22:53.100853 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.100960 kubelet[2453]: E0625 16:22:53.100928 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.100960 kubelet[2453]: W0625 16:22:53.100934 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.100960 kubelet[2453]: E0625 16:22:53.100938 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.101026 kubelet[2453]: E0625 16:22:53.101016 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.101026 kubelet[2453]: W0625 16:22:53.101020 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.101026 kubelet[2453]: E0625 16:22:53.101024 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.101125 kubelet[2453]: E0625 16:22:53.101115 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.101125 kubelet[2453]: W0625 16:22:53.101120 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.101309 kubelet[2453]: E0625 16:22:53.101189 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:22:53.101309 kubelet[2453]: E0625 16:22:53.101212 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.101309 kubelet[2453]: W0625 16:22:53.101217 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.101309 kubelet[2453]: E0625 16:22:53.101222 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.101309 kubelet[2453]: E0625 16:22:53.101301 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.101309 kubelet[2453]: W0625 16:22:53.101306 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.101309 kubelet[2453]: E0625 16:22:53.101310 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.101442 kubelet[2453]: E0625 16:22:53.101386 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.101442 kubelet[2453]: W0625 16:22:53.101392 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.101442 kubelet[2453]: E0625 16:22:53.101397 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.101509 kubelet[2453]: E0625 16:22:53.101500 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.101509 kubelet[2453]: W0625 16:22:53.101507 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.101565 kubelet[2453]: E0625 16:22:53.101512 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.106437 kubelet[2453]: E0625 16:22:53.106417 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.106437 kubelet[2453]: W0625 16:22:53.106430 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.106437 kubelet[2453]: E0625 16:22:53.106444 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:22:53.106566 kubelet[2453]: E0625 16:22:53.106557 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:53.106566 kubelet[2453]: W0625 16:22:53.106563 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:53.106610 kubelet[2453]: E0625 16:22:53.106569 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:53.123584 systemd[1]: Started cri-containerd-c2d9cf09413644041a5fe7c98e589c825953bd9868a3147294c9bf92f0250471.scope - libcontainer container c2d9cf09413644041a5fe7c98e589c825953bd9868a3147294c9bf92f0250471. Jun 25 16:22:53.132000 audit: BPF prog-id=117 op=LOAD Jun 25 16:22:53.132000 audit: BPF prog-id=118 op=LOAD Jun 25 16:22:53.132000 audit[2883]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2873 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:53.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332643963663039343133363434303431613566653763393865353839 Jun 25 16:22:53.132000 audit: BPF prog-id=119 op=LOAD Jun 25 16:22:53.132000 audit[2883]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2873 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:53.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332643963663039343133363434303431613566653763393865353839 Jun 25 16:22:53.132000 audit: BPF prog-id=119 op=UNLOAD Jun 25 16:22:53.132000 audit: BPF prog-id=118 op=UNLOAD Jun 25 16:22:53.132000 audit: BPF prog-id=120 op=LOAD Jun 25 16:22:53.132000 audit[2883]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2873 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:53.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332643963663039343133363434303431613566653763393865353839 Jun 25 16:22:53.163950 containerd[1339]: time="2024-06-25T16:22:53.163873397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:22:53.164104 containerd[1339]: time="2024-06-25T16:22:53.164090493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:22:53.164172 containerd[1339]: time="2024-06-25T16:22:53.164161183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:22:53.164225 containerd[1339]: time="2024-06-25T16:22:53.164207332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:22:53.172312 containerd[1339]: time="2024-06-25T16:22:53.172279747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dd598654-89qd9,Uid:1368514c-0b94-4150-be4d-d08687c4a876,Namespace:calico-system,Attempt:0,} returns sandbox id \"c2d9cf09413644041a5fe7c98e589c825953bd9868a3147294c9bf92f0250471\"" Jun 25 16:22:53.173393 containerd[1339]: time="2024-06-25T16:22:53.173381127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 16:22:53.183593 systemd[1]: Started cri-containerd-13184f5d8f3ab3672ff1534d4db5834662b297da239dd82893ed1a37a40cb80d.scope - libcontainer container 13184f5d8f3ab3672ff1534d4db5834662b297da239dd82893ed1a37a40cb80d. Jun 25 16:22:53.192000 audit: BPF prog-id=121 op=LOAD Jun 25 16:22:53.193000 audit: BPF prog-id=122 op=LOAD Jun 25 16:22:53.193000 audit[2951]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2936 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:53.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133313834663564386633616233363732666631353334643464623538 Jun 25 16:22:53.193000 audit: BPF prog-id=123 op=LOAD Jun 25 16:22:53.193000 audit[2951]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2936 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:53.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133313834663564386633616233363732666631353334643464623538 Jun 25 16:22:53.193000 audit: BPF prog-id=123 op=UNLOAD Jun 25 16:22:53.193000 audit: BPF prog-id=122 op=UNLOAD Jun 25 16:22:53.193000 audit: BPF prog-id=124 op=LOAD Jun 25 16:22:53.193000 audit[2951]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2936 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:53.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133313834663564386633616233363732666631353334643464623538 Jun 25 16:22:53.207371 containerd[1339]: time="2024-06-25T16:22:53.207346722Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-node-z8g5h,Uid:e523d184-387b-453d-ad2d-605c20a32836,Namespace:calico-system,Attempt:0,} returns sandbox id \"13184f5d8f3ab3672ff1534d4db5834662b297da239dd82893ed1a37a40cb80d\"" Jun 25 16:22:53.602000 audit[2974]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2974 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:53.602000 audit[2974]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe5fc58c80 a2=0 a3=7ffe5fc58c6c items=0 ppid=2589 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:53.602000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:53.603000 audit[2974]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2974 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:53.603000 audit[2974]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe5fc58c80 a2=0 a3=0 items=0 ppid=2589 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:53.603000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:54.471865 kubelet[2453]: E0625 16:22:54.471834 2453 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8kv5x" podUID="0ac2a4fe-1895-4a84-986f-bb41e2524a94" Jun 25 16:22:55.308652 containerd[1339]: time="2024-06-25T16:22:55.308620225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:55.311764 containerd[1339]: time="2024-06-25T16:22:55.311735903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 16:22:55.318605 containerd[1339]: time="2024-06-25T16:22:55.318590223Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:55.334592 containerd[1339]: time="2024-06-25T16:22:55.334496695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 2.161014818s" Jun 25 16:22:55.334592 containerd[1339]: time="2024-06-25T16:22:55.334519402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 16:22:55.343317 containerd[1339]: time="2024-06-25T16:22:55.343289835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 16:22:55.346540 containerd[1339]: time="2024-06-25T16:22:55.346329910Z" 
level=info msg="CreateContainer within sandbox \"c2d9cf09413644041a5fe7c98e589c825953bd9868a3147294c9bf92f0250471\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 16:22:55.352431 containerd[1339]: time="2024-06-25T16:22:55.352410558Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:55.353001 containerd[1339]: time="2024-06-25T16:22:55.352988703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:55.515599 containerd[1339]: time="2024-06-25T16:22:55.515566817Z" level=info msg="CreateContainer within sandbox \"c2d9cf09413644041a5fe7c98e589c825953bd9868a3147294c9bf92f0250471\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3599f7268ca7124e41793e61852869e333201c5dac67c03b5c48486889991611\"" Jun 25 16:22:55.515895 containerd[1339]: time="2024-06-25T16:22:55.515881328Z" level=info msg="StartContainer for \"3599f7268ca7124e41793e61852869e333201c5dac67c03b5c48486889991611\"" Jun 25 16:22:55.554603 systemd[1]: Started cri-containerd-3599f7268ca7124e41793e61852869e333201c5dac67c03b5c48486889991611.scope - libcontainer container 3599f7268ca7124e41793e61852869e333201c5dac67c03b5c48486889991611. Jun 25 16:22:55.565000 audit: BPF prog-id=125 op=LOAD Jun 25 16:22:55.565000 audit: BPF prog-id=126 op=LOAD Jun 25 16:22:55.565000 audit[2989]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2873 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:55.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335393966373236386361373132346534313739336536313835323836 Jun 25 16:22:55.565000 audit: BPF prog-id=127 op=LOAD Jun 25 16:22:55.565000 audit[2989]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2873 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:55.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335393966373236386361373132346534313739336536313835323836 Jun 25 16:22:55.565000 audit: BPF prog-id=127 op=UNLOAD Jun 25 16:22:55.565000 audit: BPF prog-id=126 op=UNLOAD Jun 25 16:22:55.565000 audit: BPF prog-id=128 op=LOAD Jun 25 16:22:55.565000 audit[2989]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2873 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:55.565000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335393966373236386361373132346534313739336536313835323836 Jun 25 16:22:55.587809 containerd[1339]: time="2024-06-25T16:22:55.587784068Z" level=info msg="StartContainer for \"3599f7268ca7124e41793e61852869e333201c5dac67c03b5c48486889991611\" returns successfully" Jun 25 16:22:56.472004 kubelet[2453]: E0625 16:22:56.471790 2453 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8kv5x" podUID="0ac2a4fe-1895-4a84-986f-bb41e2524a94" Jun 25 16:22:56.570409 kubelet[2453]: I0625 16:22:56.570378 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6dd598654-89qd9" podStartSLOduration=2.408364559 podStartE2EDuration="4.570367535s" podCreationTimestamp="2024-06-25 16:22:52 +0000 UTC" firstStartedPulling="2024-06-25 16:22:53.173099322 +0000 UTC m=+21.814140541" lastFinishedPulling="2024-06-25 16:22:55.335102298 +0000 UTC m=+23.976143517" observedRunningTime="2024-06-25 16:22:56.539929126 +0000 UTC m=+25.180970349" watchObservedRunningTime="2024-06-25 16:22:56.570367535 +0000 UTC m=+25.211408757" Jun 25 16:22:56.586017 containerd[1339]: time="2024-06-25T16:22:56.585985527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:56.586694 containerd[1339]: time="2024-06-25T16:22:56.586658099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 16:22:56.587011 containerd[1339]: time="2024-06-25T16:22:56.586994807Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:56.587941 containerd[1339]: time="2024-06-25T16:22:56.587924990Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:56.588783 containerd[1339]: time="2024-06-25T16:22:56.588765573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:56.589560 containerd[1339]: time="2024-06-25T16:22:56.589538789Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.246130082s" Jun 25 16:22:56.589604 containerd[1339]: time="2024-06-25T16:22:56.589561057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 16:22:56.590725 containerd[1339]: time="2024-06-25T16:22:56.590706339Z" level=info msg="CreateContainer within 
sandbox \"13184f5d8f3ab3672ff1534d4db5834662b297da239dd82893ed1a37a40cb80d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:22:56.605525 containerd[1339]: time="2024-06-25T16:22:56.605496229Z" level=info msg="CreateContainer within sandbox \"13184f5d8f3ab3672ff1534d4db5834662b297da239dd82893ed1a37a40cb80d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fab102fe679a5abbce6ee02fda52de18ee1177fe258098fc19d0b5123f2bbb93\"" Jun 25 16:22:56.605910 containerd[1339]: time="2024-06-25T16:22:56.605894175Z" level=info msg="StartContainer for \"fab102fe679a5abbce6ee02fda52de18ee1177fe258098fc19d0b5123f2bbb93\"" Jun 25 16:22:56.604000 audit[3023]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:56.604000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fff1c050290 a2=0 a3=7fff1c05027c items=0 ppid=2589 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:56.604000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:56.609991 kubelet[2453]: E0625 16:22:56.609969 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.609991 kubelet[2453]: W0625 16:22:56.609985 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.610093 kubelet[2453]: E0625 16:22:56.609999 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.610783 kubelet[2453]: E0625 16:22:56.610705 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.610783 kubelet[2453]: W0625 16:22:56.610715 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.610783 kubelet[2453]: E0625 16:22:56.610725 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.611119 kubelet[2453]: E0625 16:22:56.611055 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.611119 kubelet[2453]: W0625 16:22:56.611062 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.611119 kubelet[2453]: E0625 16:22:56.611069 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:22:56.611951 kubelet[2453]: E0625 16:22:56.611877 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.611951 kubelet[2453]: W0625 16:22:56.611887 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.611951 kubelet[2453]: E0625 16:22:56.611893 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.606000 audit[3023]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:22:56.613779 kubelet[2453]: E0625 16:22:56.612747 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.613779 kubelet[2453]: W0625 16:22:56.612754 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.613779 kubelet[2453]: E0625 16:22:56.612761 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.613779 kubelet[2453]: E0625 16:22:56.612859 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.613779 kubelet[2453]: W0625 16:22:56.612863 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.613779 kubelet[2453]: E0625 16:22:56.612869 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.613779 kubelet[2453]: E0625 16:22:56.612958 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.613779 kubelet[2453]: W0625 16:22:56.612962 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.613779 kubelet[2453]: E0625 16:22:56.612967 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:22:56.613779 kubelet[2453]: E0625 16:22:56.613053 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.613979 kubelet[2453]: W0625 16:22:56.613057 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.613979 kubelet[2453]: E0625 16:22:56.613062 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.613979 kubelet[2453]: E0625 16:22:56.613153 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.613979 kubelet[2453]: W0625 16:22:56.613158 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.613979 kubelet[2453]: E0625 16:22:56.613164 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.613979 kubelet[2453]: E0625 16:22:56.613244 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.613979 kubelet[2453]: W0625 16:22:56.613248 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.613979 kubelet[2453]: E0625 16:22:56.613253 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.613979 kubelet[2453]: E0625 16:22:56.613334 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.613979 kubelet[2453]: W0625 16:22:56.613338 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.614153 kubelet[2453]: E0625 16:22:56.613342 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.614153 kubelet[2453]: E0625 16:22:56.613423 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.614153 kubelet[2453]: W0625 16:22:56.613427 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.614153 kubelet[2453]: E0625 16:22:56.613431 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:22:56.614153 kubelet[2453]: E0625 16:22:56.613531 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.614153 kubelet[2453]: W0625 16:22:56.613536 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.614153 kubelet[2453]: E0625 16:22:56.613540 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.614153 kubelet[2453]: E0625 16:22:56.613626 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.614153 kubelet[2453]: W0625 16:22:56.613630 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.614153 kubelet[2453]: E0625 16:22:56.613634 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.606000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff1c050290 a2=0 a3=7fff1c05027c items=0 ppid=2589 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:56.606000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:22:56.615083 kubelet[2453]: E0625 16:22:56.613729 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.615083 kubelet[2453]: W0625 16:22:56.613734 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.615083 kubelet[2453]: E0625 16:22:56.613740 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.622589 kubelet[2453]: E0625 16:22:56.622453 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.622589 kubelet[2453]: W0625 16:22:56.622469 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.622589 kubelet[2453]: E0625 16:22:56.622507 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:22:56.622910 kubelet[2453]: E0625 16:22:56.622820 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.622910 kubelet[2453]: W0625 16:22:56.622827 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.622910 kubelet[2453]: E0625 16:22:56.622834 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.623260 kubelet[2453]: E0625 16:22:56.623169 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.623260 kubelet[2453]: W0625 16:22:56.623185 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.623260 kubelet[2453]: E0625 16:22:56.623194 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.623437 kubelet[2453]: E0625 16:22:56.623372 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.623437 kubelet[2453]: W0625 16:22:56.623377 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.623437 kubelet[2453]: E0625 16:22:56.623385 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.623632 kubelet[2453]: E0625 16:22:56.623574 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.623632 kubelet[2453]: W0625 16:22:56.623580 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.623632 kubelet[2453]: E0625 16:22:56.623587 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.623861 kubelet[2453]: E0625 16:22:56.623781 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.623861 kubelet[2453]: W0625 16:22:56.623788 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.623861 kubelet[2453]: E0625 16:22:56.623797 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:22:56.624225 kubelet[2453]: E0625 16:22:56.623969 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.624225 kubelet[2453]: W0625 16:22:56.623975 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.624225 kubelet[2453]: E0625 16:22:56.624018 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.624329 kubelet[2453]: E0625 16:22:56.624323 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.624370 kubelet[2453]: W0625 16:22:56.624363 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.624454 kubelet[2453]: E0625 16:22:56.624447 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.624540 kubelet[2453]: E0625 16:22:56.624535 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.624581 kubelet[2453]: W0625 16:22:56.624575 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.624660 kubelet[2453]: E0625 16:22:56.624654 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.624762 kubelet[2453]: E0625 16:22:56.624757 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.624802 kubelet[2453]: W0625 16:22:56.624795 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.624857 kubelet[2453]: E0625 16:22:56.624850 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.625004 kubelet[2453]: E0625 16:22:56.624998 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.625050 kubelet[2453]: W0625 16:22:56.625044 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.625090 kubelet[2453]: E0625 16:22:56.625084 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:22:56.625210 kubelet[2453]: E0625 16:22:56.625205 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.625260 kubelet[2453]: W0625 16:22:56.625253 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.625301 kubelet[2453]: E0625 16:22:56.625295 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.625423 kubelet[2453]: E0625 16:22:56.625418 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.625468 kubelet[2453]: W0625 16:22:56.625462 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.625629 kubelet[2453]: E0625 16:22:56.625622 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.625788 kubelet[2453]: E0625 16:22:56.625778 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.625831 kubelet[2453]: W0625 16:22:56.625825 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.625868 kubelet[2453]: E0625 16:22:56.625862 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.625992 kubelet[2453]: E0625 16:22:56.625987 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.626040 kubelet[2453]: W0625 16:22:56.626032 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.626084 kubelet[2453]: E0625 16:22:56.626075 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.626290 kubelet[2453]: E0625 16:22:56.626284 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.626338 kubelet[2453]: W0625 16:22:56.626332 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.626377 kubelet[2453]: E0625 16:22:56.626372 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:22:56.626681 kubelet[2453]: E0625 16:22:56.626675 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.626762 kubelet[2453]: W0625 16:22:56.626755 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.626818 kubelet[2453]: E0625 16:22:56.626812 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.626950 kubelet[2453]: E0625 16:22:56.626945 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:22:56.626991 kubelet[2453]: W0625 16:22:56.626985 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:22:56.627028 kubelet[2453]: E0625 16:22:56.627022 2453 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:22:56.640872 systemd[1]: Started cri-containerd-fab102fe679a5abbce6ee02fda52de18ee1177fe258098fc19d0b5123f2bbb93.scope - libcontainer container fab102fe679a5abbce6ee02fda52de18ee1177fe258098fc19d0b5123f2bbb93. Jun 25 16:22:56.652000 audit: BPF prog-id=129 op=LOAD Jun 25 16:22:56.652000 audit[3066]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2936 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:56.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661623130326665363739613561626263653665653032666461353264 Jun 25 16:22:56.652000 audit: BPF prog-id=130 op=LOAD Jun 25 16:22:56.652000 audit[3066]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2936 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:56.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661623130326665363739613561626263653665653032666461353264 Jun 25 16:22:56.652000 audit: BPF prog-id=130 op=UNLOAD Jun 25 16:22:56.652000 audit: BPF prog-id=129 op=UNLOAD Jun 25 16:22:56.652000 audit: BPF prog-id=131 op=LOAD Jun 25 16:22:56.652000 audit[3066]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2936 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:56.652000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661623130326665363739613561626263653665653032666461353264 Jun 25 16:22:56.663013 containerd[1339]: time="2024-06-25T16:22:56.662971514Z" level=info msg="StartContainer for \"fab102fe679a5abbce6ee02fda52de18ee1177fe258098fc19d0b5123f2bbb93\" returns successfully" Jun 25 16:22:56.667370 systemd[1]: cri-containerd-fab102fe679a5abbce6ee02fda52de18ee1177fe258098fc19d0b5123f2bbb93.scope: Deactivated successfully. Jun 25 16:22:56.670000 audit: BPF prog-id=131 op=UNLOAD Jun 25 16:22:56.693008 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fab102fe679a5abbce6ee02fda52de18ee1177fe258098fc19d0b5123f2bbb93-rootfs.mount: Deactivated successfully. Jun 25 16:22:57.137493 containerd[1339]: time="2024-06-25T16:22:57.120096995Z" level=info msg="shim disconnected" id=fab102fe679a5abbce6ee02fda52de18ee1177fe258098fc19d0b5123f2bbb93 namespace=k8s.io Jun 25 16:22:57.137679 containerd[1339]: time="2024-06-25T16:22:57.137656301Z" level=warning msg="cleaning up after shim disconnected" id=fab102fe679a5abbce6ee02fda52de18ee1177fe258098fc19d0b5123f2bbb93 namespace=k8s.io Jun 25 16:22:57.137755 containerd[1339]: time="2024-06-25T16:22:57.137743219Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:22:57.529153 containerd[1339]: time="2024-06-25T16:22:57.529096773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 16:22:58.472902 kubelet[2453]: E0625 16:22:58.472828 2453 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8kv5x" podUID="0ac2a4fe-1895-4a84-986f-bb41e2524a94" Jun 25 16:23:00.472537 kubelet[2453]: E0625 16:23:00.472473 2453 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8kv5x" podUID="0ac2a4fe-1895-4a84-986f-bb41e2524a94" Jun 25 16:23:00.574767 containerd[1339]: time="2024-06-25T16:23:00.574738810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:00.581095 containerd[1339]: time="2024-06-25T16:23:00.581067740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 16:23:00.587078 containerd[1339]: time="2024-06-25T16:23:00.587057843Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:00.590936 containerd[1339]: time="2024-06-25T16:23:00.590920775Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:00.597470 containerd[1339]: time="2024-06-25T16:23:00.597450408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:00.598384 containerd[1339]: 
time="2024-06-25T16:23:00.598367489Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 3.069094911s" Jun 25 16:23:00.598457 containerd[1339]: time="2024-06-25T16:23:00.598446560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 16:23:00.600760 containerd[1339]: time="2024-06-25T16:23:00.600728312Z" level=info msg="CreateContainer within sandbox \"13184f5d8f3ab3672ff1534d4db5834662b297da239dd82893ed1a37a40cb80d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 16:23:00.720812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2780199446.mount: Deactivated successfully. Jun 25 16:23:00.725149 containerd[1339]: time="2024-06-25T16:23:00.724553358Z" level=info msg="CreateContainer within sandbox \"13184f5d8f3ab3672ff1534d4db5834662b297da239dd82893ed1a37a40cb80d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"989c03501edf06ecbac45d2a1d7816dbf76d86725163eaf209fa7a6fcf6014c7\"" Jun 25 16:23:00.725613 containerd[1339]: time="2024-06-25T16:23:00.725579514Z" level=info msg="StartContainer for \"989c03501edf06ecbac45d2a1d7816dbf76d86725163eaf209fa7a6fcf6014c7\"" Jun 25 16:23:00.773359 systemd[1]: run-containerd-runc-k8s.io-989c03501edf06ecbac45d2a1d7816dbf76d86725163eaf209fa7a6fcf6014c7-runc.WWN032.mount: Deactivated successfully. Jun 25 16:23:00.780592 systemd[1]: Started cri-containerd-989c03501edf06ecbac45d2a1d7816dbf76d86725163eaf209fa7a6fcf6014c7.scope - libcontainer container 989c03501edf06ecbac45d2a1d7816dbf76d86725163eaf209fa7a6fcf6014c7. 
Jun 25 16:23:00.791190 kernel: kauditd_printk_skb: 62 callbacks suppressed Jun 25 16:23:00.791256 kernel: audit: type=1334 audit(1719332580.789:509): prog-id=132 op=LOAD Jun 25 16:23:00.789000 audit: BPF prog-id=132 op=LOAD Jun 25 16:23:00.789000 audit[3139]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=2936 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.791692 kernel: audit: type=1300 audit(1719332580.789:509): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=2936 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938396330333530316564663036656362616334356432613164373831 Jun 25 16:23:00.795621 kernel: audit: type=1327 audit(1719332580.789:509): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938396330333530316564663036656362616334356432613164373831 Jun 25 16:23:00.798955 kernel: audit: type=1334 audit(1719332580.789:510): prog-id=133 op=LOAD Jun 25 16:23:00.799001 kernel: audit: type=1300 audit(1719332580.789:510): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=2936 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.789000 audit: BPF prog-id=133 op=LOAD Jun 25 16:23:00.789000 audit[3139]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=2936 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938396330333530316564663036656362616334356432613164373831 Jun 25 16:23:00.801133 kernel: audit: type=1327 audit(1719332580.789:510): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938396330333530316564663036656362616334356432613164373831 Jun 25 16:23:00.789000 audit: BPF prog-id=133 op=UNLOAD Jun 25 16:23:00.801775 kernel: audit: type=1334 audit(1719332580.789:511): prog-id=133 op=UNLOAD Jun 25 16:23:00.801820 kernel: audit: type=1334 audit(1719332580.789:512): prog-id=132 op=UNLOAD Jun 25 16:23:00.789000 audit: BPF prog-id=132 op=UNLOAD Jun 25 16:23:00.803984 kernel: audit: type=1334 audit(1719332580.789:513): prog-id=134 op=LOAD Jun 25 16:23:00.804028 kernel: audit: type=1300 audit(1719332580.789:513): arch=c000003e syscall=321 success=yes exit=15 a0=5 
a1=c000139be0 a2=78 a3=0 items=0 ppid=2936 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.789000 audit: BPF prog-id=134 op=LOAD Jun 25 16:23:00.789000 audit[3139]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=2936 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938396330333530316564663036656362616334356432613164373831 Jun 25 16:23:00.817760 containerd[1339]: time="2024-06-25T16:23:00.817731975Z" level=info msg="StartContainer for \"989c03501edf06ecbac45d2a1d7816dbf76d86725163eaf209fa7a6fcf6014c7\" returns successfully" Jun 25 16:23:02.224675 systemd[1]: cri-containerd-989c03501edf06ecbac45d2a1d7816dbf76d86725163eaf209fa7a6fcf6014c7.scope: Deactivated successfully. Jun 25 16:23:02.228000 audit: BPF prog-id=134 op=UNLOAD Jun 25 16:23:02.255415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-989c03501edf06ecbac45d2a1d7816dbf76d86725163eaf209fa7a6fcf6014c7-rootfs.mount: Deactivated successfully. Jun 25 16:23:02.257663 containerd[1339]: time="2024-06-25T16:23:02.257625795Z" level=info msg="shim disconnected" id=989c03501edf06ecbac45d2a1d7816dbf76d86725163eaf209fa7a6fcf6014c7 namespace=k8s.io Jun 25 16:23:02.257663 containerd[1339]: time="2024-06-25T16:23:02.257663259Z" level=warning msg="cleaning up after shim disconnected" id=989c03501edf06ecbac45d2a1d7816dbf76d86725163eaf209fa7a6fcf6014c7 namespace=k8s.io Jun 25 16:23:02.257921 containerd[1339]: time="2024-06-25T16:23:02.257670880Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:23:02.269579 kubelet[2453]: I0625 16:23:02.269558 2453 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 16:23:02.285998 kubelet[2453]: I0625 16:23:02.285545 2453 topology_manager.go:215] "Topology Admit Handler" podUID="47c466fb-1e8e-485b-9ff4-0f34f0fde19b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9nn2t" Jun 25 16:23:02.287144 kubelet[2453]: I0625 16:23:02.286957 2453 topology_manager.go:215] "Topology Admit Handler" podUID="90ecef2d-85e5-4ada-bc6e-0ed9c7763599" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6ww22" Jun 25 16:23:02.289861 systemd[1]: Created slice kubepods-burstable-pod47c466fb_1e8e_485b_9ff4_0f34f0fde19b.slice - libcontainer container kubepods-burstable-pod47c466fb_1e8e_485b_9ff4_0f34f0fde19b.slice. Jun 25 16:23:02.293267 systemd[1]: Created slice kubepods-burstable-pod90ecef2d_85e5_4ada_bc6e_0ed9c7763599.slice - libcontainer container kubepods-burstable-pod90ecef2d_85e5_4ada_bc6e_0ed9c7763599.slice. 
Jun 25 16:23:02.295160 kubelet[2453]: W0625 16:23:02.294986 2453 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jun 25 16:23:02.295160 kubelet[2453]: E0625 16:23:02.295019 2453 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jun 25 16:23:02.296298 kubelet[2453]: I0625 16:23:02.296282 2453 topology_manager.go:215] "Topology Admit Handler" podUID="9455c0fc-e4a1-4704-970c-3296122d6c00" podNamespace="calico-system" podName="calico-kube-controllers-56785499b5-88q2j" Jun 25 16:23:02.299564 systemd[1]: Created slice kubepods-besteffort-pod9455c0fc_e4a1_4704_970c_3296122d6c00.slice - libcontainer container kubepods-besteffort-pod9455c0fc_e4a1_4704_970c_3296122d6c00.slice. Jun 25 16:23:02.359931 kubelet[2453]: I0625 16:23:02.359904 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sxbb\" (UniqueName: \"kubernetes.io/projected/47c466fb-1e8e-485b-9ff4-0f34f0fde19b-kube-api-access-9sxbb\") pod \"coredns-7db6d8ff4d-9nn2t\" (UID: \"47c466fb-1e8e-485b-9ff4-0f34f0fde19b\") " pod="kube-system/coredns-7db6d8ff4d-9nn2t" Jun 25 16:23:02.359931 kubelet[2453]: I0625 16:23:02.359931 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ecef2d-85e5-4ada-bc6e-0ed9c7763599-config-volume\") pod \"coredns-7db6d8ff4d-6ww22\" (UID: \"90ecef2d-85e5-4ada-bc6e-0ed9c7763599\") " pod="kube-system/coredns-7db6d8ff4d-6ww22" Jun 25 16:23:02.360058 kubelet[2453]: I0625 16:23:02.359948 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpkv4\" (UniqueName: \"kubernetes.io/projected/9455c0fc-e4a1-4704-970c-3296122d6c00-kube-api-access-zpkv4\") pod \"calico-kube-controllers-56785499b5-88q2j\" (UID: \"9455c0fc-e4a1-4704-970c-3296122d6c00\") " pod="calico-system/calico-kube-controllers-56785499b5-88q2j" Jun 25 16:23:02.360058 kubelet[2453]: I0625 16:23:02.359992 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9455c0fc-e4a1-4704-970c-3296122d6c00-tigera-ca-bundle\") pod \"calico-kube-controllers-56785499b5-88q2j\" (UID: \"9455c0fc-e4a1-4704-970c-3296122d6c00\") " pod="calico-system/calico-kube-controllers-56785499b5-88q2j" Jun 25 16:23:02.360058 kubelet[2453]: I0625 16:23:02.360006 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47c466fb-1e8e-485b-9ff4-0f34f0fde19b-config-volume\") pod \"coredns-7db6d8ff4d-9nn2t\" (UID: \"47c466fb-1e8e-485b-9ff4-0f34f0fde19b\") " pod="kube-system/coredns-7db6d8ff4d-9nn2t" Jun 25 16:23:02.360058 kubelet[2453]: I0625 16:23:02.360019 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkb4h\" (UniqueName: 
\"kubernetes.io/projected/90ecef2d-85e5-4ada-bc6e-0ed9c7763599-kube-api-access-nkb4h\") pod \"coredns-7db6d8ff4d-6ww22\" (UID: \"90ecef2d-85e5-4ada-bc6e-0ed9c7763599\") " pod="kube-system/coredns-7db6d8ff4d-6ww22" Jun 25 16:23:02.475819 systemd[1]: Created slice kubepods-besteffort-pod0ac2a4fe_1895_4a84_986f_bb41e2524a94.slice - libcontainer container kubepods-besteffort-pod0ac2a4fe_1895_4a84_986f_bb41e2524a94.slice. Jun 25 16:23:02.477569 containerd[1339]: time="2024-06-25T16:23:02.477532420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8kv5x,Uid:0ac2a4fe-1895-4a84-986f-bb41e2524a94,Namespace:calico-system,Attempt:0,}" Jun 25 16:23:02.548532 containerd[1339]: time="2024-06-25T16:23:02.548387676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 16:23:02.850358 containerd[1339]: time="2024-06-25T16:23:02.850275244Z" level=error msg="Failed to destroy network for sandbox \"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:02.851864 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0-shm.mount: Deactivated successfully. Jun 25 16:23:02.857362 containerd[1339]: time="2024-06-25T16:23:02.852602559Z" level=error msg="encountered an error cleaning up failed sandbox \"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:02.857362 containerd[1339]: time="2024-06-25T16:23:02.852665431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8kv5x,Uid:0ac2a4fe-1895-4a84-986f-bb41e2524a94,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:02.857462 kubelet[2453]: E0625 16:23:02.852795 2453 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:02.857462 kubelet[2453]: E0625 16:23:02.852837 2453 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8kv5x" Jun 25 16:23:02.857462 kubelet[2453]: E0625 16:23:02.852849 2453 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8kv5x" Jun 25 16:23:02.857556 kubelet[2453]: E0625 16:23:02.852881 2453 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8kv5x_calico-system(0ac2a4fe-1895-4a84-986f-bb41e2524a94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8kv5x_calico-system(0ac2a4fe-1895-4a84-986f-bb41e2524a94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8kv5x" podUID="0ac2a4fe-1895-4a84-986f-bb41e2524a94" Jun 25 16:23:02.902731 containerd[1339]: time="2024-06-25T16:23:02.902704376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56785499b5-88q2j,Uid:9455c0fc-e4a1-4704-970c-3296122d6c00,Namespace:calico-system,Attempt:0,}" Jun 25 16:23:02.940926 containerd[1339]: time="2024-06-25T16:23:02.940883605Z" level=error msg="Failed to destroy network for sandbox \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:02.941283 containerd[1339]: time="2024-06-25T16:23:02.941263585Z" level=error msg="encountered an error cleaning up failed sandbox \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:02.941383 containerd[1339]: time="2024-06-25T16:23:02.941367572Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56785499b5-88q2j,Uid:9455c0fc-e4a1-4704-970c-3296122d6c00,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:02.941787 kubelet[2453]: E0625 16:23:02.941569 2453 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:02.941787 kubelet[2453]: E0625 16:23:02.941604 2453 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56785499b5-88q2j" Jun 25 16:23:02.941787 kubelet[2453]: E0625 16:23:02.941617 2453 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56785499b5-88q2j" Jun 25 16:23:02.942551 kubelet[2453]: E0625 16:23:02.941646 2453 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-56785499b5-88q2j_calico-system(9455c0fc-e4a1-4704-970c-3296122d6c00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-56785499b5-88q2j_calico-system(9455c0fc-e4a1-4704-970c-3296122d6c00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56785499b5-88q2j" podUID="9455c0fc-e4a1-4704-970c-3296122d6c00" Jun 25 16:23:03.461820 kubelet[2453]: E0625 16:23:03.461793 2453 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jun 25 16:23:03.462089 kubelet[2453]: E0625 16:23:03.462077 2453 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jun 25 16:23:03.472889 kubelet[2453]: E0625 16:23:03.472868 2453 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/90ecef2d-85e5-4ada-bc6e-0ed9c7763599-config-volume podName:90ecef2d-85e5-4ada-bc6e-0ed9c7763599 nodeName:}" failed. No retries permitted until 2024-06-25 16:23:03.961840214 +0000 UTC m=+32.602881433 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/90ecef2d-85e5-4ada-bc6e-0ed9c7763599-config-volume") pod "coredns-7db6d8ff4d-6ww22" (UID: "90ecef2d-85e5-4ada-bc6e-0ed9c7763599") : failed to sync configmap cache: timed out waiting for the condition Jun 25 16:23:03.473009 kubelet[2453]: E0625 16:23:03.472900 2453 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/47c466fb-1e8e-485b-9ff4-0f34f0fde19b-config-volume podName:47c466fb-1e8e-485b-9ff4-0f34f0fde19b nodeName:}" failed. No retries permitted until 2024-06-25 16:23:03.972892715 +0000 UTC m=+32.613933933 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/47c466fb-1e8e-485b-9ff4-0f34f0fde19b-config-volume") pod "coredns-7db6d8ff4d-9nn2t" (UID: "47c466fb-1e8e-485b-9ff4-0f34f0fde19b") : failed to sync configmap cache: timed out waiting for the condition Jun 25 16:23:03.553209 kubelet[2453]: I0625 16:23:03.553192 2453 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Jun 25 16:23:03.553750 kubelet[2453]: I0625 16:23:03.553734 2453 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Jun 25 16:23:03.563848 containerd[1339]: time="2024-06-25T16:23:03.563824876Z" level=info msg="StopPodSandbox for \"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\"" Jun 25 16:23:03.564178 containerd[1339]: time="2024-06-25T16:23:03.563866986Z" level=info msg="StopPodSandbox for \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\"" Jun 25 16:23:03.567669 containerd[1339]: time="2024-06-25T16:23:03.567196015Z" level=info msg="Ensure that sandbox 7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613 in task-service has been cleanup successfully" Jun 25 16:23:03.567767 containerd[1339]: time="2024-06-25T16:23:03.567744451Z" level=info msg="Ensure that sandbox 22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0 in task-service has been cleanup successfully" Jun 25 16:23:03.585011 containerd[1339]: time="2024-06-25T16:23:03.584973012Z" level=error msg="StopPodSandbox for \"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\" failed" error="failed to destroy network for sandbox \"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:03.585162 kubelet[2453]: E0625 16:23:03.585122 2453 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Jun 25 16:23:03.585215 kubelet[2453]: E0625 16:23:03.585175 2453 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0"} Jun 25 16:23:03.585242 kubelet[2453]: E0625 16:23:03.585231 2453 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ac2a4fe-1895-4a84-986f-bb41e2524a94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:23:03.585289 kubelet[2453]: E0625 16:23:03.585246 2453 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"0ac2a4fe-1895-4a84-986f-bb41e2524a94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8kv5x" podUID="0ac2a4fe-1895-4a84-986f-bb41e2524a94" Jun 25 16:23:03.586697 containerd[1339]: time="2024-06-25T16:23:03.586677566Z" level=error msg="StopPodSandbox for \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\" failed" error="failed to destroy network for sandbox \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:03.586878 kubelet[2453]: E0625 16:23:03.586817 2453 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Jun 25 16:23:03.586878 kubelet[2453]: E0625 16:23:03.586837 2453 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613"} Jun 25 16:23:03.586878 kubelet[2453]: E0625 16:23:03.586852 2453 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9455c0fc-e4a1-4704-970c-3296122d6c00\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:23:03.586878 kubelet[2453]: E0625 16:23:03.586864 2453 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9455c0fc-e4a1-4704-970c-3296122d6c00\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56785499b5-88q2j" podUID="9455c0fc-e4a1-4704-970c-3296122d6c00" Jun 25 16:23:04.092433 containerd[1339]: time="2024-06-25T16:23:04.092402360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9nn2t,Uid:47c466fb-1e8e-485b-9ff4-0f34f0fde19b,Namespace:kube-system,Attempt:0,}" Jun 25 16:23:04.110813 containerd[1339]: time="2024-06-25T16:23:04.110780240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6ww22,Uid:90ecef2d-85e5-4ada-bc6e-0ed9c7763599,Namespace:kube-system,Attempt:0,}" Jun 25 16:23:04.230749 containerd[1339]: time="2024-06-25T16:23:04.230711441Z" level=error msg="Failed to 
destroy network for sandbox \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:04.231539 containerd[1339]: time="2024-06-25T16:23:04.231516468Z" level=error msg="encountered an error cleaning up failed sandbox \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:04.231631 containerd[1339]: time="2024-06-25T16:23:04.231614803Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9nn2t,Uid:47c466fb-1e8e-485b-9ff4-0f34f0fde19b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:04.232592 kubelet[2453]: E0625 16:23:04.232548 2453 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:04.232675 kubelet[2453]: E0625 16:23:04.232657 2453 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9nn2t" Jun 25 16:23:04.232724 kubelet[2453]: E0625 16:23:04.232676 2453 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9nn2t" Jun 25 16:23:04.232790 kubelet[2453]: E0625 16:23:04.232711 2453 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9nn2t_kube-system(47c466fb-1e8e-485b-9ff4-0f34f0fde19b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9nn2t_kube-system(47c466fb-1e8e-485b-9ff4-0f34f0fde19b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9nn2t" podUID="47c466fb-1e8e-485b-9ff4-0f34f0fde19b" Jun 25 16:23:04.257594 containerd[1339]: 
time="2024-06-25T16:23:04.255607782Z" level=error msg="Failed to destroy network for sandbox \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:04.257594 containerd[1339]: time="2024-06-25T16:23:04.256070884Z" level=error msg="encountered an error cleaning up failed sandbox \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:04.257594 containerd[1339]: time="2024-06-25T16:23:04.256255237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6ww22,Uid:90ecef2d-85e5-4ada-bc6e-0ed9c7763599,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:04.258020 kubelet[2453]: E0625 16:23:04.256379 2453 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:04.258020 kubelet[2453]: E0625 16:23:04.256425 2453 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6ww22" Jun 25 16:23:04.258020 kubelet[2453]: E0625 16:23:04.256441 2453 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6ww22" Jun 25 16:23:04.255722 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b-shm.mount: Deactivated successfully. 
Jun 25 16:23:04.258214 kubelet[2453]: E0625 16:23:04.256466 2453 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-6ww22_kube-system(90ecef2d-85e5-4ada-bc6e-0ed9c7763599)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-6ww22_kube-system(90ecef2d-85e5-4ada-bc6e-0ed9c7763599)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6ww22" podUID="90ecef2d-85e5-4ada-bc6e-0ed9c7763599" Jun 25 16:23:04.257215 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1-shm.mount: Deactivated successfully. Jun 25 16:23:04.555436 kubelet[2453]: I0625 16:23:04.555367 2453 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Jun 25 16:23:04.555871 containerd[1339]: time="2024-06-25T16:23:04.555850696Z" level=info msg="StopPodSandbox for \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\"" Jun 25 16:23:04.556055 containerd[1339]: time="2024-06-25T16:23:04.556043644Z" level=info msg="Ensure that sandbox a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1 in task-service has been cleanup successfully" Jun 25 16:23:04.557466 kubelet[2453]: I0625 16:23:04.557451 2453 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Jun 25 16:23:04.561862 containerd[1339]: time="2024-06-25T16:23:04.557783600Z" level=info msg="StopPodSandbox for \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\"" Jun 25 16:23:04.561862 containerd[1339]: time="2024-06-25T16:23:04.557997502Z" level=info msg="Ensure that sandbox fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b in task-service has been cleanup successfully" Jun 25 16:23:04.580972 containerd[1339]: time="2024-06-25T16:23:04.580936921Z" level=error msg="StopPodSandbox for \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\" failed" error="failed to destroy network for sandbox \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:04.581420 kubelet[2453]: E0625 16:23:04.581327 2453 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Jun 25 16:23:04.581420 kubelet[2453]: E0625 16:23:04.581358 2453 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1"} Jun 25 16:23:04.581420 kubelet[2453]: E0625 16:23:04.581381 2453 
kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"90ecef2d-85e5-4ada-bc6e-0ed9c7763599\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:23:04.581420 kubelet[2453]: E0625 16:23:04.581397 2453 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"90ecef2d-85e5-4ada-bc6e-0ed9c7763599\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6ww22" podUID="90ecef2d-85e5-4ada-bc6e-0ed9c7763599" Jun 25 16:23:04.583632 containerd[1339]: time="2024-06-25T16:23:04.583608832Z" level=error msg="StopPodSandbox for \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\" failed" error="failed to destroy network for sandbox \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:04.583832 kubelet[2453]: E0625 16:23:04.583770 2453 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Jun 25 16:23:04.583832 kubelet[2453]: E0625 16:23:04.583789 2453 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b"} Jun 25 16:23:04.583832 kubelet[2453]: E0625 16:23:04.583807 2453 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"47c466fb-1e8e-485b-9ff4-0f34f0fde19b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:23:04.583832 kubelet[2453]: E0625 16:23:04.583818 2453 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"47c466fb-1e8e-485b-9ff4-0f34f0fde19b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9nn2t" 
podUID="47c466fb-1e8e-485b-9ff4-0f34f0fde19b" Jun 25 16:23:06.443348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2622922635.mount: Deactivated successfully. Jun 25 16:23:06.544746 containerd[1339]: time="2024-06-25T16:23:06.544716342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:06.549612 containerd[1339]: time="2024-06-25T16:23:06.549567622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 16:23:06.563399 containerd[1339]: time="2024-06-25T16:23:06.563365217Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:06.568663 containerd[1339]: time="2024-06-25T16:23:06.568619996Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:06.578295 containerd[1339]: time="2024-06-25T16:23:06.578270517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:06.578581 containerd[1339]: time="2024-06-25T16:23:06.578559203Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 4.0301418s" Jun 25 16:23:06.578624 containerd[1339]: time="2024-06-25T16:23:06.578582648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 16:23:06.631132 containerd[1339]: time="2024-06-25T16:23:06.631100767Z" level=info msg="CreateContainer within sandbox \"13184f5d8f3ab3672ff1534d4db5834662b297da239dd82893ed1a37a40cb80d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 16:23:06.664029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1042149974.mount: Deactivated successfully. Jun 25 16:23:06.694639 containerd[1339]: time="2024-06-25T16:23:06.694495690Z" level=info msg="CreateContainer within sandbox \"13184f5d8f3ab3672ff1534d4db5834662b297da239dd82893ed1a37a40cb80d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"60ccb113fcc0f75a060e1cf027fee66f644555a5902b5316039a35dd27991dbb\"" Jun 25 16:23:06.736952 containerd[1339]: time="2024-06-25T16:23:06.736921772Z" level=info msg="StartContainer for \"60ccb113fcc0f75a060e1cf027fee66f644555a5902b5316039a35dd27991dbb\"" Jun 25 16:23:06.854607 systemd[1]: Started cri-containerd-60ccb113fcc0f75a060e1cf027fee66f644555a5902b5316039a35dd27991dbb.scope - libcontainer container 60ccb113fcc0f75a060e1cf027fee66f644555a5902b5316039a35dd27991dbb. 
Jun 25 16:23:06.864000 audit: BPF prog-id=135 op=LOAD Jun 25 16:23:06.873724 kernel: kauditd_printk_skb: 2 callbacks suppressed Jun 25 16:23:06.873761 kernel: audit: type=1334 audit(1719332586.864:515): prog-id=135 op=LOAD Jun 25 16:23:06.873780 kernel: audit: type=1300 audit(1719332586.864:515): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2936 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:06.873795 kernel: audit: type=1327 audit(1719332586.864:515): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630636362313133666363306637356130363065316366303237666565 Jun 25 16:23:06.873812 kernel: audit: type=1334 audit(1719332586.864:516): prog-id=136 op=LOAD Jun 25 16:23:06.873825 kernel: audit: type=1300 audit(1719332586.864:516): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2936 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:06.881666 kernel: audit: type=1327 audit(1719332586.864:516): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630636362313133666363306637356130363065316366303237666565 Jun 25 16:23:06.881711 kernel: audit: type=1334 audit(1719332586.864:517): prog-id=136 op=UNLOAD Jun 25 16:23:06.881728 kernel: audit: type=1334 audit(1719332586.864:518): prog-id=135 op=UNLOAD Jun 25 16:23:06.881742 kernel: audit: type=1334 audit(1719332586.864:519): prog-id=137 op=LOAD Jun 25 16:23:06.881756 kernel: audit: type=1300 audit(1719332586.864:519): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2936 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:06.864000 audit[3420]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2936 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:06.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630636362313133666363306637356130363065316366303237666565 Jun 25 16:23:06.864000 audit: BPF prog-id=136 op=LOAD Jun 25 16:23:06.864000 audit[3420]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2936 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:06.864000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630636362313133666363306637356130363065316366303237666565 Jun 25 16:23:06.864000 audit: BPF prog-id=136 op=UNLOAD Jun 25 16:23:06.864000 audit: BPF prog-id=135 op=UNLOAD Jun 25 16:23:06.864000 audit: BPF prog-id=137 op=LOAD Jun 25 16:23:06.864000 audit[3420]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2936 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:06.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630636362313133666363306637356130363065316366303237666565 Jun 25 16:23:06.954148 containerd[1339]: time="2024-06-25T16:23:06.954074953Z" level=info msg="StartContainer for \"60ccb113fcc0f75a060e1cf027fee66f644555a5902b5316039a35dd27991dbb\" returns successfully" Jun 25 16:23:07.168628 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 16:23:07.168724 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 25 16:23:08.461000 audit[3535]: AVC avc: denied { write } for pid=3535 comm="tee" name="fd" dev="proc" ino=30953 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:23:08.461000 audit[3535]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd974b8a26 a2=241 a3=1b6 items=1 ppid=3494 pid=3535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:08.461000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 16:23:08.461000 audit: PATH item=0 name="/dev/fd/63" inode=30950 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:23:08.461000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:23:08.481000 audit[3532]: AVC avc: denied { write } for pid=3532 comm="tee" name="fd" dev="proc" ino=31766 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:23:08.481000 audit[3532]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc0e2d4a16 a2=241 a3=1b6 items=1 ppid=3496 pid=3532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:08.481000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 16:23:08.481000 audit: PATH item=0 name="/dev/fd/63" inode=31755 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:23:08.481000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:23:08.491000 audit[3550]: AVC avc: denied { write } for pid=3550 comm="tee" name="fd" dev="proc" ino=31777 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:23:08.494000 audit[3555]: AVC avc: denied { write } for pid=3555 comm="tee" name="fd" dev="proc" ino=31780 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:23:08.498000 audit[3553]: AVC avc: denied { write } for pid=3553 comm="tee" name="fd" dev="proc" ino=31785 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:23:08.494000 audit[3555]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc3d9eea17 a2=241 a3=1b6 items=1 ppid=3495 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:08.494000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 16:23:08.494000 audit: PATH item=0 name="/dev/fd/63" inode=31771 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:23:08.494000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:23:08.491000 audit[3550]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe60f63a28 a2=241 a3=1b6 items=1 ppid=3487 pid=3550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:08.491000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 16:23:08.491000 audit: PATH item=0 name="/dev/fd/63" inode=30963 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:23:08.491000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:23:08.498000 audit[3553]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdf4bc1a26 a2=241 a3=1b6 items=1 ppid=3491 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:08.498000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 16:23:08.498000 audit: PATH item=0 name="/dev/fd/63" inode=30966 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:23:08.498000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:23:08.511000 audit[3559]: AVC avc: denied { write } for pid=3559 comm="tee" name="fd" dev="proc" ino=31791 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:23:08.511000 audit[3559]: SYSCALL 
arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdf6f4ca27 a2=241 a3=1b6 items=1 ppid=3493 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:08.511000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 16:23:08.511000 audit: PATH item=0 name="/dev/fd/63" inode=30969 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:23:08.511000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:23:08.513000 audit[3563]: AVC avc: denied { write } for pid=3563 comm="tee" name="fd" dev="proc" ino=31795 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:23:08.513000 audit[3563]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff72818a26 a2=241 a3=1b6 items=1 ppid=3492 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:08.513000 audit: CWD cwd="/etc/service/enabled/confd/log" Jun 25 16:23:08.513000 audit: PATH item=0 name="/dev/fd/63" inode=30970 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:23:08.513000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:23:08.581968 kubelet[2453]: I0625 16:23:08.581931 2453 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:23:09.015568 systemd-networkd[1152]: vxlan.calico: Link UP Jun 25 16:23:09.020000 audit: BPF prog-id=138 op=LOAD Jun 25 16:23:09.020000 audit[3630]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff4a290700 a2=70 a3=7f80be9e1000 items=0 ppid=3497 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:09.020000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:23:09.020000 audit: BPF prog-id=138 op=UNLOAD Jun 25 16:23:09.020000 audit: BPF prog-id=139 op=LOAD Jun 25 16:23:09.020000 audit[3630]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff4a290700 a2=70 a3=6f items=0 ppid=3497 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:09.020000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:23:09.020000 audit: BPF prog-id=139 op=UNLOAD Jun 25 16:23:09.020000 audit: BPF prog-id=140 op=LOAD Jun 25 16:23:09.020000 audit[3630]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=5 a0=5 a1=7fff4a290690 a2=70 a3=7fff4a290700 items=0 ppid=3497 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:09.020000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:23:09.020000 audit: BPF prog-id=140 op=UNLOAD Jun 25 16:23:09.022000 audit: BPF prog-id=141 op=LOAD Jun 25 16:23:09.022000 audit[3630]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff4a2906c0 a2=70 a3=0 items=0 ppid=3497 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:09.022000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:23:09.015572 systemd-networkd[1152]: vxlan.calico: Gained carrier Jun 25 16:23:09.031000 audit: BPF prog-id=141 op=UNLOAD Jun 25 16:23:09.187000 audit[3659]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=3659 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:23:09.187000 audit[3659]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd8b0ef4b0 a2=0 a3=7ffd8b0ef49c items=0 ppid=3497 pid=3659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:09.187000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:23:09.202000 audit[3658]: NETFILTER_CFG table=raw:98 family=2 entries=19 op=nft_register_chain pid=3658 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:23:09.202000 audit[3658]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7ffcbee452a0 a2=0 a3=7ffcbee4528c items=0 ppid=3497 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:09.202000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:23:09.203000 audit[3661]: NETFILTER_CFG table=nat:99 family=2 entries=15 op=nft_register_chain pid=3661 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:23:09.203000 audit[3661]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe8455a8f0 a2=0 a3=7ffe8455a8dc items=0 ppid=3497 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:09.203000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:23:09.204000 audit[3660]: NETFILTER_CFG table=filter:100 family=2 entries=39 op=nft_register_chain pid=3660 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:23:09.204000 audit[3660]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffcae1e8d30 a2=0 a3=7ffcae1e8d1c items=0 ppid=3497 pid=3660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:09.204000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:23:10.646636 systemd-networkd[1152]: vxlan.calico: Gained IPv6LL Jun 25 16:23:14.472404 containerd[1339]: time="2024-06-25T16:23:14.472363501Z" level=info msg="StopPodSandbox for \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\"" Jun 25 16:23:14.548103 kubelet[2453]: I0625 16:23:14.538519 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z8g5h" podStartSLOduration=9.15315541 podStartE2EDuration="22.524066023s" podCreationTimestamp="2024-06-25 16:22:52 +0000 UTC" firstStartedPulling="2024-06-25 16:22:53.208265888 +0000 UTC m=+21.849307107" lastFinishedPulling="2024-06-25 16:23:06.5791765 +0000 UTC m=+35.220217720" observedRunningTime="2024-06-25 16:23:07.648926632 +0000 UTC m=+36.289967864" watchObservedRunningTime="2024-06-25 16:23:14.524066023 +0000 UTC m=+43.165107252" Jun 25 16:23:14.758395 containerd[1339]: 2024-06-25 16:23:14.522 [INFO][3705] k8s.go 608: Cleaning up netns ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Jun 25 16:23:14.758395 containerd[1339]: 2024-06-25 16:23:14.522 [INFO][3705] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" iface="eth0" netns="/var/run/netns/cni-4074782c-61d9-42ed-2371-b18865f9a57c" Jun 25 16:23:14.758395 containerd[1339]: 2024-06-25 16:23:14.522 [INFO][3705] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" iface="eth0" netns="/var/run/netns/cni-4074782c-61d9-42ed-2371-b18865f9a57c" Jun 25 16:23:14.758395 containerd[1339]: 2024-06-25 16:23:14.523 [INFO][3705] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" iface="eth0" netns="/var/run/netns/cni-4074782c-61d9-42ed-2371-b18865f9a57c" Jun 25 16:23:14.758395 containerd[1339]: 2024-06-25 16:23:14.523 [INFO][3705] k8s.go 615: Releasing IP address(es) ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Jun 25 16:23:14.758395 containerd[1339]: 2024-06-25 16:23:14.523 [INFO][3705] utils.go 188: Calico CNI releasing IP address ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Jun 25 16:23:14.758395 containerd[1339]: 2024-06-25 16:23:14.744 [INFO][3711] ipam_plugin.go 411: Releasing address using handleID ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" HandleID="k8s-pod-network.7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Workload="localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0" Jun 25 16:23:14.758395 containerd[1339]: 2024-06-25 16:23:14.745 [INFO][3711] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:23:14.758395 containerd[1339]: 2024-06-25 16:23:14.745 [INFO][3711] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:23:14.758395 containerd[1339]: 2024-06-25 16:23:14.754 [WARNING][3711] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" HandleID="k8s-pod-network.7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Workload="localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0" Jun 25 16:23:14.758395 containerd[1339]: 2024-06-25 16:23:14.755 [INFO][3711] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" HandleID="k8s-pod-network.7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Workload="localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0" Jun 25 16:23:14.758395 containerd[1339]: 2024-06-25 16:23:14.755 [INFO][3711] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:23:14.758395 containerd[1339]: 2024-06-25 16:23:14.756 [INFO][3705] k8s.go 621: Teardown processing complete. ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Jun 25 16:23:14.761586 containerd[1339]: time="2024-06-25T16:23:14.760509114Z" level=info msg="TearDown network for sandbox \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\" successfully" Jun 25 16:23:14.761586 containerd[1339]: time="2024-06-25T16:23:14.760534724Z" level=info msg="StopPodSandbox for \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\" returns successfully" Jun 25 16:23:14.760408 systemd[1]: run-netns-cni\x2d4074782c\x2d61d9\x2d42ed\x2d2371\x2db18865f9a57c.mount: Deactivated successfully. 
Jun 25 16:23:14.766451 containerd[1339]: time="2024-06-25T16:23:14.766425308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56785499b5-88q2j,Uid:9455c0fc-e4a1-4704-970c-3296122d6c00,Namespace:calico-system,Attempt:1,}" Jun 25 16:23:14.882047 systemd-networkd[1152]: cali9fc66418477: Link UP Jun 25 16:23:14.882611 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:23:14.882706 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9fc66418477: link becomes ready Jun 25 16:23:14.883103 systemd-networkd[1152]: cali9fc66418477: Gained carrier Jun 25 16:23:14.889327 containerd[1339]: 2024-06-25 16:23:14.835 [INFO][3718] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0 calico-kube-controllers-56785499b5- calico-system 9455c0fc-e4a1-4704-970c-3296122d6c00 671 0 2024-06-25 16:22:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:56785499b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-56785499b5-88q2j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9fc66418477 [] []}} ContainerID="903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e" Namespace="calico-system" Pod="calico-kube-controllers-56785499b5-88q2j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56785499b5--88q2j-" Jun 25 16:23:14.889327 containerd[1339]: 2024-06-25 16:23:14.835 [INFO][3718] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e" Namespace="calico-system" Pod="calico-kube-controllers-56785499b5-88q2j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0" Jun 25 16:23:14.889327 containerd[1339]: 2024-06-25 16:23:14.857 [INFO][3730] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e" HandleID="k8s-pod-network.903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e" Workload="localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0" Jun 25 16:23:14.889327 containerd[1339]: 2024-06-25 16:23:14.863 [INFO][3730] ipam_plugin.go 264: Auto assigning IP ContainerID="903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e" HandleID="k8s-pod-network.903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e" Workload="localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000267e20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-56785499b5-88q2j", "timestamp":"2024-06-25 16:23:14.857987433 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:23:14.889327 containerd[1339]: 2024-06-25 16:23:14.863 [INFO][3730] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:23:14.889327 containerd[1339]: 2024-06-25 16:23:14.863 [INFO][3730] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:23:14.889327 containerd[1339]: 2024-06-25 16:23:14.863 [INFO][3730] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:23:14.889327 containerd[1339]: 2024-06-25 16:23:14.864 [INFO][3730] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e" host="localhost" Jun 25 16:23:14.889327 containerd[1339]: 2024-06-25 16:23:14.868 [INFO][3730] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:23:14.889327 containerd[1339]: 2024-06-25 16:23:14.871 [INFO][3730] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:23:14.889327 containerd[1339]: 2024-06-25 16:23:14.871 [INFO][3730] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:23:14.889327 containerd[1339]: 2024-06-25 16:23:14.872 [INFO][3730] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:23:14.889327 containerd[1339]: 2024-06-25 16:23:14.872 [INFO][3730] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e" host="localhost" Jun 25 16:23:14.889327 containerd[1339]: 2024-06-25 16:23:14.873 [INFO][3730] ipam.go 1685: Creating new handle: k8s-pod-network.903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e Jun 25 16:23:14.889327 containerd[1339]: 2024-06-25 16:23:14.876 [INFO][3730] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e" host="localhost" Jun 25 16:23:14.889327 containerd[1339]: 2024-06-25 16:23:14.879 [INFO][3730] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e" host="localhost" Jun 25 16:23:14.889327 containerd[1339]: 2024-06-25 16:23:14.879 [INFO][3730] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e" host="localhost" Jun 25 16:23:14.889327 containerd[1339]: 2024-06-25 16:23:14.879 [INFO][3730] ipam_plugin.go 373: Released host-wide IPAM lock. 
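The IPAM walk above claims 192.168.88.129 from the block 192.168.88.128/26 that is affine to host "localhost". A quick sketch of the block arithmetic behind those records, using Python's standard ipaddress module (nothing here is code that Calico itself runs):

    # Block arithmetic behind the IPAM records above (standard library only).
    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")
    print(block.num_addresses)     # 64 addresses in the affine block
    print(block[0], block[-1])     # 192.168.88.128 192.168.88.191
    print(block[1])                # 192.168.88.129, the first address claimed above

Later in this log the coredns-7db6d8ff4d-9nn2t sandbox is assigned the next address from the same block, 192.168.88.130.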
Jun 25 16:23:14.889327 containerd[1339]: 2024-06-25 16:23:14.879 [INFO][3730] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e" HandleID="k8s-pod-network.903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e" Workload="localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0" Jun 25 16:23:14.891293 containerd[1339]: 2024-06-25 16:23:14.880 [INFO][3718] k8s.go 386: Populated endpoint ContainerID="903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e" Namespace="calico-system" Pod="calico-kube-controllers-56785499b5-88q2j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0", GenerateName:"calico-kube-controllers-56785499b5-", Namespace:"calico-system", SelfLink:"", UID:"9455c0fc-e4a1-4704-970c-3296122d6c00", ResourceVersion:"671", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56785499b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-56785499b5-88q2j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9fc66418477", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:14.891293 containerd[1339]: 2024-06-25 16:23:14.880 [INFO][3718] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e" Namespace="calico-system" Pod="calico-kube-controllers-56785499b5-88q2j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0" Jun 25 16:23:14.891293 containerd[1339]: 2024-06-25 16:23:14.881 [INFO][3718] dataplane_linux.go 68: Setting the host side veth name to cali9fc66418477 ContainerID="903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e" Namespace="calico-system" Pod="calico-kube-controllers-56785499b5-88q2j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0" Jun 25 16:23:14.891293 containerd[1339]: 2024-06-25 16:23:14.883 [INFO][3718] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e" Namespace="calico-system" Pod="calico-kube-controllers-56785499b5-88q2j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0" Jun 25 16:23:14.891293 containerd[1339]: 2024-06-25 16:23:14.883 [INFO][3718] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e" Namespace="calico-system" Pod="calico-kube-controllers-56785499b5-88q2j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0", GenerateName:"calico-kube-controllers-56785499b5-", Namespace:"calico-system", SelfLink:"", UID:"9455c0fc-e4a1-4704-970c-3296122d6c00", ResourceVersion:"671", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56785499b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e", Pod:"calico-kube-controllers-56785499b5-88q2j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9fc66418477", MAC:"0e:b8:11:cc:ed:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:14.891293 containerd[1339]: 2024-06-25 16:23:14.888 [INFO][3718] k8s.go 500: Wrote updated endpoint to datastore ContainerID="903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e" Namespace="calico-system" Pod="calico-kube-controllers-56785499b5-88q2j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0" Jun 25 16:23:14.908103 kernel: kauditd_printk_skb: 64 callbacks suppressed Jun 25 16:23:14.908199 kernel: audit: type=1325 audit(1719332594.906:539): table=filter:101 family=2 entries=34 op=nft_register_chain pid=3751 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:23:14.906000 audit[3751]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=3751 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:23:14.906000 audit[3751]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffd1938af30 a2=0 a3=7ffd1938af1c items=0 ppid=3497 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:14.911344 kernel: audit: type=1300 audit(1719332594.906:539): arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffd1938af30 a2=0 a3=7ffd1938af1c items=0 ppid=3497 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:14.911399 kernel: audit: type=1327 audit(1719332594.906:539): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:23:14.906000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:23:14.923592 containerd[1339]: time="2024-06-25T16:23:14.923526068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:23:14.923730 containerd[1339]: time="2024-06-25T16:23:14.923613078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:14.923730 containerd[1339]: time="2024-06-25T16:23:14.923630282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:23:14.923730 containerd[1339]: time="2024-06-25T16:23:14.923655013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:14.951704 systemd[1]: Started cri-containerd-903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e.scope - libcontainer container 903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e. Jun 25 16:23:14.970195 kernel: audit: type=1334 audit(1719332594.961:540): prog-id=142 op=LOAD Jun 25 16:23:14.970248 kernel: audit: type=1334 audit(1719332594.961:541): prog-id=143 op=LOAD Jun 25 16:23:14.972373 kernel: audit: type=1300 audit(1719332594.961:541): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3760 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:14.972395 kernel: audit: type=1327 audit(1719332594.961:541): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930336331383230313830326161336132613032363238643332376664 Jun 25 16:23:14.972412 kernel: audit: type=1334 audit(1719332594.961:542): prog-id=144 op=LOAD Jun 25 16:23:14.972426 kernel: audit: type=1300 audit(1719332594.961:542): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3760 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:14.972441 kernel: audit: type=1327 audit(1719332594.961:542): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930336331383230313830326161336132613032363238643332376664 Jun 25 16:23:14.961000 audit: BPF prog-id=142 op=LOAD Jun 25 16:23:14.961000 audit: BPF prog-id=143 op=LOAD Jun 25 16:23:14.961000 audit[3770]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3760 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:14.961000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930336331383230313830326161336132613032363238643332376664 Jun 25 16:23:14.961000 audit: BPF prog-id=144 op=LOAD Jun 25 16:23:14.961000 audit[3770]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3760 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:14.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930336331383230313830326161336132613032363238643332376664 Jun 25 16:23:14.970325 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:23:14.961000 audit: BPF prog-id=144 op=UNLOAD Jun 25 16:23:14.961000 audit: BPF prog-id=143 op=UNLOAD Jun 25 16:23:14.961000 audit: BPF prog-id=145 op=LOAD Jun 25 16:23:14.961000 audit[3770]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3760 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:14.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930336331383230313830326161336132613032363238643332376664 Jun 25 16:23:15.000876 containerd[1339]: time="2024-06-25T16:23:15.000579289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56785499b5-88q2j,Uid:9455c0fc-e4a1-4704-970c-3296122d6c00,Namespace:calico-system,Attempt:1,} returns sandbox id \"903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e\"" Jun 25 16:23:15.005990 containerd[1339]: time="2024-06-25T16:23:15.005961739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 16:23:16.470589 systemd-networkd[1152]: cali9fc66418477: Gained IPv6LL Jun 25 16:23:16.473100 containerd[1339]: time="2024-06-25T16:23:16.473077216Z" level=info msg="StopPodSandbox for \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\"" Jun 25 16:23:16.539253 containerd[1339]: 2024-06-25 16:23:16.512 [INFO][3808] k8s.go 608: Cleaning up netns ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Jun 25 16:23:16.539253 containerd[1339]: 2024-06-25 16:23:16.512 [INFO][3808] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" iface="eth0" netns="/var/run/netns/cni-f7cd316c-a9ac-fb61-b061-ceb7fba2bbbe" Jun 25 16:23:16.539253 containerd[1339]: 2024-06-25 16:23:16.512 [INFO][3808] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" iface="eth0" netns="/var/run/netns/cni-f7cd316c-a9ac-fb61-b061-ceb7fba2bbbe" Jun 25 16:23:16.539253 containerd[1339]: 2024-06-25 16:23:16.512 [INFO][3808] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" iface="eth0" netns="/var/run/netns/cni-f7cd316c-a9ac-fb61-b061-ceb7fba2bbbe" Jun 25 16:23:16.539253 containerd[1339]: 2024-06-25 16:23:16.512 [INFO][3808] k8s.go 615: Releasing IP address(es) ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Jun 25 16:23:16.539253 containerd[1339]: 2024-06-25 16:23:16.512 [INFO][3808] utils.go 188: Calico CNI releasing IP address ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Jun 25 16:23:16.539253 containerd[1339]: 2024-06-25 16:23:16.533 [INFO][3814] ipam_plugin.go 411: Releasing address using handleID ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" HandleID="k8s-pod-network.fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Workload="localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0" Jun 25 16:23:16.539253 containerd[1339]: 2024-06-25 16:23:16.533 [INFO][3814] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:23:16.539253 containerd[1339]: 2024-06-25 16:23:16.533 [INFO][3814] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:23:16.539253 containerd[1339]: 2024-06-25 16:23:16.536 [WARNING][3814] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" HandleID="k8s-pod-network.fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Workload="localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0" Jun 25 16:23:16.539253 containerd[1339]: 2024-06-25 16:23:16.536 [INFO][3814] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" HandleID="k8s-pod-network.fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Workload="localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0" Jun 25 16:23:16.539253 containerd[1339]: 2024-06-25 16:23:16.537 [INFO][3814] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:23:16.539253 containerd[1339]: 2024-06-25 16:23:16.538 [INFO][3808] k8s.go 621: Teardown processing complete. ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Jun 25 16:23:16.542421 containerd[1339]: time="2024-06-25T16:23:16.540705176Z" level=info msg="TearDown network for sandbox \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\" successfully" Jun 25 16:23:16.542421 containerd[1339]: time="2024-06-25T16:23:16.540726657Z" level=info msg="StopPodSandbox for \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\" returns successfully" Jun 25 16:23:16.542421 containerd[1339]: time="2024-06-25T16:23:16.541426800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9nn2t,Uid:47c466fb-1e8e-485b-9ff4-0f34f0fde19b,Namespace:kube-system,Attempt:1,}" Jun 25 16:23:16.540667 systemd[1]: run-netns-cni\x2df7cd316c\x2da9ac\x2dfb61\x2db061\x2dceb7fba2bbbe.mount: Deactivated successfully. 
Jun 25 16:23:16.763415 systemd-networkd[1152]: cali0ff6e61f6a9: Link UP Jun 25 16:23:16.765527 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:23:16.765566 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0ff6e61f6a9: link becomes ready Jun 25 16:23:16.765633 systemd-networkd[1152]: cali0ff6e61f6a9: Gained carrier Jun 25 16:23:16.775975 containerd[1339]: 2024-06-25 16:23:16.676 [INFO][3825] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0 coredns-7db6d8ff4d- kube-system 47c466fb-1e8e-485b-9ff4-0f34f0fde19b 680 0 2024-06-25 16:22:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-9nn2t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0ff6e61f6a9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9nn2t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9nn2t-" Jun 25 16:23:16.775975 containerd[1339]: 2024-06-25 16:23:16.676 [INFO][3825] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9nn2t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0" Jun 25 16:23:16.775975 containerd[1339]: 2024-06-25 16:23:16.730 [INFO][3838] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15" HandleID="k8s-pod-network.9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15" Workload="localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0" Jun 25 16:23:16.775975 containerd[1339]: 2024-06-25 16:23:16.743 [INFO][3838] ipam_plugin.go 264: Auto assigning IP ContainerID="9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15" HandleID="k8s-pod-network.9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15" Workload="localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310a20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-9nn2t", "timestamp":"2024-06-25 16:23:16.73069218 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:23:16.775975 containerd[1339]: 2024-06-25 16:23:16.743 [INFO][3838] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:23:16.775975 containerd[1339]: 2024-06-25 16:23:16.743 [INFO][3838] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:23:16.775975 containerd[1339]: 2024-06-25 16:23:16.743 [INFO][3838] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:23:16.775975 containerd[1339]: 2024-06-25 16:23:16.745 [INFO][3838] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15" host="localhost" Jun 25 16:23:16.775975 containerd[1339]: 2024-06-25 16:23:16.749 [INFO][3838] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:23:16.775975 containerd[1339]: 2024-06-25 16:23:16.751 [INFO][3838] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:23:16.775975 containerd[1339]: 2024-06-25 16:23:16.752 [INFO][3838] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:23:16.775975 containerd[1339]: 2024-06-25 16:23:16.753 [INFO][3838] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:23:16.775975 containerd[1339]: 2024-06-25 16:23:16.753 [INFO][3838] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15" host="localhost" Jun 25 16:23:16.775975 containerd[1339]: 2024-06-25 16:23:16.753 [INFO][3838] ipam.go 1685: Creating new handle: k8s-pod-network.9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15 Jun 25 16:23:16.775975 containerd[1339]: 2024-06-25 16:23:16.756 [INFO][3838] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15" host="localhost" Jun 25 16:23:16.775975 containerd[1339]: 2024-06-25 16:23:16.759 [INFO][3838] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15" host="localhost" Jun 25 16:23:16.775975 containerd[1339]: 2024-06-25 16:23:16.759 [INFO][3838] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15" host="localhost" Jun 25 16:23:16.775975 containerd[1339]: 2024-06-25 16:23:16.759 [INFO][3838] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:23:16.775975 containerd[1339]: 2024-06-25 16:23:16.759 [INFO][3838] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15" HandleID="k8s-pod-network.9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15" Workload="localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0" Jun 25 16:23:16.776894 containerd[1339]: 2024-06-25 16:23:16.761 [INFO][3825] k8s.go 386: Populated endpoint ContainerID="9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9nn2t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"47c466fb-1e8e-485b-9ff4-0f34f0fde19b", ResourceVersion:"680", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 22, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-9nn2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0ff6e61f6a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:16.776894 containerd[1339]: 2024-06-25 16:23:16.761 [INFO][3825] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9nn2t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0" Jun 25 16:23:16.776894 containerd[1339]: 2024-06-25 16:23:16.761 [INFO][3825] dataplane_linux.go 68: Setting the host side veth name to cali0ff6e61f6a9 ContainerID="9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9nn2t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0" Jun 25 16:23:16.776894 containerd[1339]: 2024-06-25 16:23:16.767 [INFO][3825] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9nn2t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0" Jun 25 16:23:16.776894 containerd[1339]: 2024-06-25 16:23:16.767 [INFO][3825] k8s.go 414: Added Mac, interface name, 
and active container ID to endpoint ContainerID="9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9nn2t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"47c466fb-1e8e-485b-9ff4-0f34f0fde19b", ResourceVersion:"680", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 22, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15", Pod:"coredns-7db6d8ff4d-9nn2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0ff6e61f6a9", MAC:"36:77:24:13:0d:6e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:16.776894 containerd[1339]: 2024-06-25 16:23:16.775 [INFO][3825] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9nn2t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0" Jun 25 16:23:16.787000 audit[3856]: NETFILTER_CFG table=filter:102 family=2 entries=38 op=nft_register_chain pid=3856 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:23:16.787000 audit[3856]: SYSCALL arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7fff35f15ea0 a2=0 a3=7fff35f15e8c items=0 ppid=3497 pid=3856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:16.787000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:23:16.817247 containerd[1339]: time="2024-06-25T16:23:16.817147491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:23:16.817247 containerd[1339]: time="2024-06-25T16:23:16.817174762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:16.817247 containerd[1339]: time="2024-06-25T16:23:16.817183598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:23:16.817247 containerd[1339]: time="2024-06-25T16:23:16.817190515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:16.830583 systemd[1]: Started cri-containerd-9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15.scope - libcontainer container 9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15. Jun 25 16:23:16.836000 audit: BPF prog-id=146 op=LOAD Jun 25 16:23:16.836000 audit: BPF prog-id=147 op=LOAD Jun 25 16:23:16.836000 audit[3877]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3866 pid=3877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:16.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966356633376139363638303330353863343636346634633934616336 Jun 25 16:23:16.837000 audit: BPF prog-id=148 op=LOAD Jun 25 16:23:16.837000 audit[3877]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3866 pid=3877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:16.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966356633376139363638303330353863343636346634633934616336 Jun 25 16:23:16.837000 audit: BPF prog-id=148 op=UNLOAD Jun 25 16:23:16.837000 audit: BPF prog-id=147 op=UNLOAD Jun 25 16:23:16.837000 audit: BPF prog-id=149 op=LOAD Jun 25 16:23:16.837000 audit[3877]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3866 pid=3877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:16.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966356633376139363638303330353863343636346634633934616336 Jun 25 16:23:16.838562 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:23:16.867090 containerd[1339]: time="2024-06-25T16:23:16.867044253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9nn2t,Uid:47c466fb-1e8e-485b-9ff4-0f34f0fde19b,Namespace:kube-system,Attempt:1,} returns sandbox id \"9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15\"" Jun 25 16:23:16.870941 containerd[1339]: time="2024-06-25T16:23:16.870917481Z" level=info msg="CreateContainer within sandbox 
\"9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:23:16.891925 containerd[1339]: time="2024-06-25T16:23:16.891902415Z" level=info msg="CreateContainer within sandbox \"9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7a9c20f3f5d30bca267709ec457655876ba390f76643d0f8a265f3edf836f2c5\"" Jun 25 16:23:16.893178 containerd[1339]: time="2024-06-25T16:23:16.893163387Z" level=info msg="StartContainer for \"7a9c20f3f5d30bca267709ec457655876ba390f76643d0f8a265f3edf836f2c5\"" Jun 25 16:23:16.928315 systemd[1]: Started cri-containerd-7a9c20f3f5d30bca267709ec457655876ba390f76643d0f8a265f3edf836f2c5.scope - libcontainer container 7a9c20f3f5d30bca267709ec457655876ba390f76643d0f8a265f3edf836f2c5. Jun 25 16:23:16.938000 audit: BPF prog-id=150 op=LOAD Jun 25 16:23:16.939000 audit: BPF prog-id=151 op=LOAD Jun 25 16:23:16.939000 audit[3910]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3866 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:16.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761396332306633663564333062636132363737303965633435373635 Jun 25 16:23:16.939000 audit: BPF prog-id=152 op=LOAD Jun 25 16:23:16.939000 audit[3910]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3866 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:16.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761396332306633663564333062636132363737303965633435373635 Jun 25 16:23:16.939000 audit: BPF prog-id=152 op=UNLOAD Jun 25 16:23:16.939000 audit: BPF prog-id=151 op=UNLOAD Jun 25 16:23:16.939000 audit: BPF prog-id=153 op=LOAD Jun 25 16:23:16.939000 audit[3910]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3866 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:16.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761396332306633663564333062636132363737303965633435373635 Jun 25 16:23:16.958194 containerd[1339]: time="2024-06-25T16:23:16.958162865Z" level=info msg="StartContainer for \"7a9c20f3f5d30bca267709ec457655876ba390f76643d0f8a265f3edf836f2c5\" returns successfully" Jun 25 16:23:17.112968 containerd[1339]: time="2024-06-25T16:23:17.112903275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:17.113877 
containerd[1339]: time="2024-06-25T16:23:17.113856384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 16:23:17.114319 containerd[1339]: time="2024-06-25T16:23:17.114306838Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:17.115667 containerd[1339]: time="2024-06-25T16:23:17.115655506Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:17.116687 containerd[1339]: time="2024-06-25T16:23:17.116675790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:17.117670 containerd[1339]: time="2024-06-25T16:23:17.117649617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 2.111654924s" Jun 25 16:23:17.117750 containerd[1339]: time="2024-06-25T16:23:17.117738191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 16:23:17.122366 containerd[1339]: time="2024-06-25T16:23:17.122345865Z" level=info msg="CreateContainer within sandbox \"903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 16:23:17.135323 containerd[1339]: time="2024-06-25T16:23:17.135297629Z" level=info msg="CreateContainer within sandbox \"903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0ecbbd0f193c8bd397e12348eadfa20af93b2de7f403cc54a6b981b4f01f4b67\"" Jun 25 16:23:17.135607 containerd[1339]: time="2024-06-25T16:23:17.135595160Z" level=info msg="StartContainer for \"0ecbbd0f193c8bd397e12348eadfa20af93b2de7f403cc54a6b981b4f01f4b67\"" Jun 25 16:23:17.158592 systemd[1]: Started cri-containerd-0ecbbd0f193c8bd397e12348eadfa20af93b2de7f403cc54a6b981b4f01f4b67.scope - libcontainer container 0ecbbd0f193c8bd397e12348eadfa20af93b2de7f403cc54a6b981b4f01f4b67. 
Jun 25 16:23:17.164000 audit: BPF prog-id=154 op=LOAD Jun 25 16:23:17.165000 audit: BPF prog-id=155 op=LOAD Jun 25 16:23:17.165000 audit[3947]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=3760 pid=3947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:17.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065636262643066313933633862643339376531323334386561646661 Jun 25 16:23:17.165000 audit: BPF prog-id=156 op=LOAD Jun 25 16:23:17.165000 audit[3947]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=3760 pid=3947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:17.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065636262643066313933633862643339376531323334386561646661 Jun 25 16:23:17.165000 audit: BPF prog-id=156 op=UNLOAD Jun 25 16:23:17.165000 audit: BPF prog-id=155 op=UNLOAD Jun 25 16:23:17.165000 audit: BPF prog-id=157 op=LOAD Jun 25 16:23:17.165000 audit[3947]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=3760 pid=3947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:17.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065636262643066313933633862643339376531323334386561646661 Jun 25 16:23:17.204988 containerd[1339]: time="2024-06-25T16:23:17.204957723Z" level=info msg="StartContainer for \"0ecbbd0f193c8bd397e12348eadfa20af93b2de7f403cc54a6b981b4f01f4b67\" returns successfully" Jun 25 16:23:17.473293 containerd[1339]: time="2024-06-25T16:23:17.473262581Z" level=info msg="StopPodSandbox for \"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\"" Jun 25 16:23:17.547643 containerd[1339]: 2024-06-25 16:23:17.510 [INFO][3992] k8s.go 608: Cleaning up netns ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Jun 25 16:23:17.547643 containerd[1339]: 2024-06-25 16:23:17.510 [INFO][3992] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" iface="eth0" netns="/var/run/netns/cni-05c9f30f-442e-7d00-0b40-320539d1c0b4" Jun 25 16:23:17.547643 containerd[1339]: 2024-06-25 16:23:17.510 [INFO][3992] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" iface="eth0" netns="/var/run/netns/cni-05c9f30f-442e-7d00-0b40-320539d1c0b4" Jun 25 16:23:17.547643 containerd[1339]: 2024-06-25 16:23:17.511 [INFO][3992] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" iface="eth0" netns="/var/run/netns/cni-05c9f30f-442e-7d00-0b40-320539d1c0b4" Jun 25 16:23:17.547643 containerd[1339]: 2024-06-25 16:23:17.511 [INFO][3992] k8s.go 615: Releasing IP address(es) ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Jun 25 16:23:17.547643 containerd[1339]: 2024-06-25 16:23:17.511 [INFO][3992] utils.go 188: Calico CNI releasing IP address ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Jun 25 16:23:17.547643 containerd[1339]: 2024-06-25 16:23:17.536 [INFO][3998] ipam_plugin.go 411: Releasing address using handleID ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" HandleID="k8s-pod-network.22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Workload="localhost-k8s-csi--node--driver--8kv5x-eth0" Jun 25 16:23:17.547643 containerd[1339]: 2024-06-25 16:23:17.536 [INFO][3998] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:23:17.547643 containerd[1339]: 2024-06-25 16:23:17.536 [INFO][3998] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:23:17.547643 containerd[1339]: 2024-06-25 16:23:17.545 [WARNING][3998] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" HandleID="k8s-pod-network.22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Workload="localhost-k8s-csi--node--driver--8kv5x-eth0" Jun 25 16:23:17.547643 containerd[1339]: 2024-06-25 16:23:17.545 [INFO][3998] ipam_plugin.go 439: Releasing address using workloadID ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" HandleID="k8s-pod-network.22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Workload="localhost-k8s-csi--node--driver--8kv5x-eth0" Jun 25 16:23:17.547643 containerd[1339]: 2024-06-25 16:23:17.545 [INFO][3998] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:23:17.547643 containerd[1339]: 2024-06-25 16:23:17.546 [INFO][3992] k8s.go 621: Teardown processing complete. ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Jun 25 16:23:17.549894 containerd[1339]: time="2024-06-25T16:23:17.549684028Z" level=info msg="TearDown network for sandbox \"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\" successfully" Jun 25 16:23:17.549894 containerd[1339]: time="2024-06-25T16:23:17.549708515Z" level=info msg="StopPodSandbox for \"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\" returns successfully" Jun 25 16:23:17.549327 systemd[1]: run-netns-cni\x2d05c9f30f\x2d442e\x2d7d00\x2d0b40\x2d320539d1c0b4.mount: Deactivated successfully. 
Jun 25 16:23:17.550622 containerd[1339]: time="2024-06-25T16:23:17.550608546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8kv5x,Uid:0ac2a4fe-1895-4a84-986f-bb41e2524a94,Namespace:calico-system,Attempt:1,}" Jun 25 16:23:17.595184 kubelet[2453]: I0625 16:23:17.595153 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9nn2t" podStartSLOduration=31.595141181 podStartE2EDuration="31.595141181s" podCreationTimestamp="2024-06-25 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:23:17.587858226 +0000 UTC m=+46.228899454" watchObservedRunningTime="2024-06-25 16:23:17.595141181 +0000 UTC m=+46.236182403" Jun 25 16:23:17.674688 systemd-networkd[1152]: cali040e8adbb19: Link UP Jun 25 16:23:17.676127 systemd-networkd[1152]: cali040e8adbb19: Gained carrier Jun 25 16:23:17.676523 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali040e8adbb19: link becomes ready Jun 25 16:23:17.687405 kubelet[2453]: I0625 16:23:17.687026 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-56785499b5-88q2j" podStartSLOduration=23.570349436 podStartE2EDuration="25.687010839s" podCreationTimestamp="2024-06-25 16:22:52 +0000 UTC" firstStartedPulling="2024-06-25 16:23:15.001471255 +0000 UTC m=+43.642512471" lastFinishedPulling="2024-06-25 16:23:17.118132654 +0000 UTC m=+45.759173874" observedRunningTime="2024-06-25 16:23:17.604361757 +0000 UTC m=+46.245402985" watchObservedRunningTime="2024-06-25 16:23:17.687010839 +0000 UTC m=+46.328052069" Jun 25 16:23:17.699000 audit[4027]: NETFILTER_CFG table=filter:103 family=2 entries=14 op=nft_register_rule pid=4027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:17.700503 containerd[1339]: 2024-06-25 16:23:17.599 [INFO][4004] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8kv5x-eth0 csi-node-driver- calico-system 0ac2a4fe-1895-4a84-986f-bb41e2524a94 697 0 2024-06-25 16:22:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-8kv5x eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali040e8adbb19 [] []}} ContainerID="b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f" Namespace="calico-system" Pod="csi-node-driver-8kv5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--8kv5x-" Jun 25 16:23:17.700503 containerd[1339]: 2024-06-25 16:23:17.600 [INFO][4004] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f" Namespace="calico-system" Pod="csi-node-driver-8kv5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--8kv5x-eth0" Jun 25 16:23:17.700503 containerd[1339]: 2024-06-25 16:23:17.632 [INFO][4016] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f" HandleID="k8s-pod-network.b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f" Workload="localhost-k8s-csi--node--driver--8kv5x-eth0" Jun 25 16:23:17.700503 containerd[1339]: 2024-06-25 16:23:17.652 [INFO][4016] 
ipam_plugin.go 264: Auto assigning IP ContainerID="b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f" HandleID="k8s-pod-network.b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f" Workload="localhost-k8s-csi--node--driver--8kv5x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7b10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8kv5x", "timestamp":"2024-06-25 16:23:17.632128391 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:23:17.700503 containerd[1339]: 2024-06-25 16:23:17.652 [INFO][4016] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:23:17.700503 containerd[1339]: 2024-06-25 16:23:17.652 [INFO][4016] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:23:17.700503 containerd[1339]: 2024-06-25 16:23:17.652 [INFO][4016] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:23:17.700503 containerd[1339]: 2024-06-25 16:23:17.653 [INFO][4016] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f" host="localhost" Jun 25 16:23:17.700503 containerd[1339]: 2024-06-25 16:23:17.659 [INFO][4016] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:23:17.700503 containerd[1339]: 2024-06-25 16:23:17.661 [INFO][4016] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:23:17.700503 containerd[1339]: 2024-06-25 16:23:17.662 [INFO][4016] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:23:17.700503 containerd[1339]: 2024-06-25 16:23:17.664 [INFO][4016] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:23:17.700503 containerd[1339]: 2024-06-25 16:23:17.664 [INFO][4016] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f" host="localhost" Jun 25 16:23:17.700503 containerd[1339]: 2024-06-25 16:23:17.665 [INFO][4016] ipam.go 1685: Creating new handle: k8s-pod-network.b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f Jun 25 16:23:17.700503 containerd[1339]: 2024-06-25 16:23:17.668 [INFO][4016] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f" host="localhost" Jun 25 16:23:17.700503 containerd[1339]: 2024-06-25 16:23:17.671 [INFO][4016] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f" host="localhost" Jun 25 16:23:17.700503 containerd[1339]: 2024-06-25 16:23:17.671 [INFO][4016] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f" host="localhost" Jun 25 16:23:17.700503 containerd[1339]: 2024-06-25 16:23:17.671 [INFO][4016] ipam_plugin.go 373: Released host-wide IPAM lock. 
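Aside: the calico-ipam sequence above takes the host-wide IPAM lock, confirms this node's affinity for the block 192.168.88.128/26, and claims 192.168.88.131 from it. As a quick sanity check of the block arithmetic (a /26 covers 64 addresses, 192.168.88.128 through 192.168.88.191), Python's ipaddress module reproduces it:

    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")
    assigned = ipaddress.ip_address("192.168.88.131")

    print(block.num_addresses)        # 64 addresses in the affine block
    print(assigned in block)          # True: the claimed IP falls inside the block
    print(block.broadcast_address)    # 192.168.88.191, the last address of the block
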
Jun 25 16:23:17.700503 containerd[1339]: 2024-06-25 16:23:17.671 [INFO][4016] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f" HandleID="k8s-pod-network.b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f" Workload="localhost-k8s-csi--node--driver--8kv5x-eth0" Jun 25 16:23:17.701233 containerd[1339]: 2024-06-25 16:23:17.673 [INFO][4004] k8s.go 386: Populated endpoint ContainerID="b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f" Namespace="calico-system" Pod="csi-node-driver-8kv5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--8kv5x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8kv5x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0ac2a4fe-1895-4a84-986f-bb41e2524a94", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8kv5x", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali040e8adbb19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:17.701233 containerd[1339]: 2024-06-25 16:23:17.673 [INFO][4004] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f" Namespace="calico-system" Pod="csi-node-driver-8kv5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--8kv5x-eth0" Jun 25 16:23:17.701233 containerd[1339]: 2024-06-25 16:23:17.673 [INFO][4004] dataplane_linux.go 68: Setting the host side veth name to cali040e8adbb19 ContainerID="b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f" Namespace="calico-system" Pod="csi-node-driver-8kv5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--8kv5x-eth0" Jun 25 16:23:17.701233 containerd[1339]: 2024-06-25 16:23:17.676 [INFO][4004] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f" Namespace="calico-system" Pod="csi-node-driver-8kv5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--8kv5x-eth0" Jun 25 16:23:17.701233 containerd[1339]: 2024-06-25 16:23:17.676 [INFO][4004] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f" Namespace="calico-system" Pod="csi-node-driver-8kv5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--8kv5x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8kv5x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0ac2a4fe-1895-4a84-986f-bb41e2524a94", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f", Pod:"csi-node-driver-8kv5x", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali040e8adbb19", MAC:"62:00:4e:72:db:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:17.701233 containerd[1339]: 2024-06-25 16:23:17.688 [INFO][4004] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f" Namespace="calico-system" Pod="csi-node-driver-8kv5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--8kv5x-eth0" Jun 25 16:23:17.699000 audit[4027]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffe1b154900 a2=0 a3=7ffe1b1548ec items=0 ppid=2589 pid=4027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:17.699000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:17.701000 audit[4027]: NETFILTER_CFG table=nat:104 family=2 entries=14 op=nft_register_rule pid=4027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:17.701000 audit[4027]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe1b154900 a2=0 a3=0 items=0 ppid=2589 pid=4027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:17.701000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:17.719734 containerd[1339]: time="2024-06-25T16:23:17.719677685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:23:17.719888 containerd[1339]: time="2024-06-25T16:23:17.719738913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:17.719888 containerd[1339]: time="2024-06-25T16:23:17.719762503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:23:17.719888 containerd[1339]: time="2024-06-25T16:23:17.719783008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:17.734589 systemd[1]: Started cri-containerd-b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f.scope - libcontainer container b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f. Jun 25 16:23:17.737000 audit[4085]: NETFILTER_CFG table=filter:105 family=2 entries=44 op=nft_register_chain pid=4085 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:23:17.737000 audit[4085]: SYSCALL arch=c000003e syscall=46 success=yes exit=22680 a0=3 a1=7ffe3be6c660 a2=0 a3=7ffe3be6c64c items=0 ppid=3497 pid=4085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:17.737000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:23:17.745000 audit: BPF prog-id=158 op=LOAD Jun 25 16:23:17.746000 audit: BPF prog-id=159 op=LOAD Jun 25 16:23:17.746000 audit[4073]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4062 pid=4073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:17.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239616566613139363463626136363430633238336332633039633537 Jun 25 16:23:17.746000 audit: BPF prog-id=160 op=LOAD Jun 25 16:23:17.746000 audit[4073]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4062 pid=4073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:17.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239616566613139363463626136363430633238336332633039633537 Jun 25 16:23:17.746000 audit: BPF prog-id=160 op=UNLOAD Jun 25 16:23:17.746000 audit: BPF prog-id=159 op=UNLOAD Jun 25 16:23:17.746000 audit: BPF prog-id=161 op=LOAD Jun 25 16:23:17.746000 audit[4073]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=4062 pid=4073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:17.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239616566613139363463626136363430633238336332633039633537 Jun 25 16:23:17.749860 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No 
such device or address Jun 25 16:23:17.760783 containerd[1339]: time="2024-06-25T16:23:17.760756118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8kv5x,Uid:0ac2a4fe-1895-4a84-986f-bb41e2524a94,Namespace:calico-system,Attempt:1,} returns sandbox id \"b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f\"" Jun 25 16:23:17.761716 containerd[1339]: time="2024-06-25T16:23:17.761698979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 16:23:18.390682 systemd-networkd[1152]: cali0ff6e61f6a9: Gained IPv6LL Jun 25 16:23:18.542091 systemd[1]: run-containerd-runc-k8s.io-0ecbbd0f193c8bd397e12348eadfa20af93b2de7f403cc54a6b981b4f01f4b67-runc.dT9DtW.mount: Deactivated successfully. Jun 25 16:23:18.626000 audit[4099]: NETFILTER_CFG table=filter:106 family=2 entries=11 op=nft_register_rule pid=4099 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:18.626000 audit[4099]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc9409ea40 a2=0 a3=7ffc9409ea2c items=0 ppid=2589 pid=4099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:18.626000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:18.627000 audit[4099]: NETFILTER_CFG table=nat:107 family=2 entries=35 op=nft_register_chain pid=4099 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:18.627000 audit[4099]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc9409ea40 a2=0 a3=7ffc9409ea2c items=0 ppid=2589 pid=4099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:18.627000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:19.042342 containerd[1339]: time="2024-06-25T16:23:19.042312528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:19.046744 containerd[1339]: time="2024-06-25T16:23:19.046716690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 16:23:19.052717 containerd[1339]: time="2024-06-25T16:23:19.052697718Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:19.067424 containerd[1339]: time="2024-06-25T16:23:19.067396606Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:19.070170 containerd[1339]: time="2024-06-25T16:23:19.070149742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:19.071091 containerd[1339]: time="2024-06-25T16:23:19.071070132Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag 
\"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.309350508s" Jun 25 16:23:19.071168 containerd[1339]: time="2024-06-25T16:23:19.071153086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 16:23:19.074615 containerd[1339]: time="2024-06-25T16:23:19.074545285Z" level=info msg="CreateContainer within sandbox \"b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 16:23:19.097107 containerd[1339]: time="2024-06-25T16:23:19.097077215Z" level=info msg="CreateContainer within sandbox \"b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f30e63d35e6e6bf88dcdf0a91e5c2e3145a46b618189c092401db4f85475d599\"" Jun 25 16:23:19.097810 containerd[1339]: time="2024-06-25T16:23:19.097792543Z" level=info msg="StartContainer for \"f30e63d35e6e6bf88dcdf0a91e5c2e3145a46b618189c092401db4f85475d599\"" Jun 25 16:23:19.121590 systemd[1]: Started cri-containerd-f30e63d35e6e6bf88dcdf0a91e5c2e3145a46b618189c092401db4f85475d599.scope - libcontainer container f30e63d35e6e6bf88dcdf0a91e5c2e3145a46b618189c092401db4f85475d599. Jun 25 16:23:19.131000 audit: BPF prog-id=162 op=LOAD Jun 25 16:23:19.131000 audit[4118]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4062 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:19.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633306536336433356536653662663838646364663061393165356332 Jun 25 16:23:19.131000 audit: BPF prog-id=163 op=LOAD Jun 25 16:23:19.131000 audit[4118]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4062 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:19.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633306536336433356536653662663838646364663061393165356332 Jun 25 16:23:19.131000 audit: BPF prog-id=163 op=UNLOAD Jun 25 16:23:19.131000 audit: BPF prog-id=162 op=UNLOAD Jun 25 16:23:19.131000 audit: BPF prog-id=164 op=LOAD Jun 25 16:23:19.131000 audit[4118]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4062 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:19.131000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633306536336433356536653662663838646364663061393165356332 Jun 25 16:23:19.146001 containerd[1339]: time="2024-06-25T16:23:19.145980634Z" level=info msg="StartContainer for \"f30e63d35e6e6bf88dcdf0a91e5c2e3145a46b618189c092401db4f85475d599\" returns successfully" Jun 25 16:23:19.152839 containerd[1339]: time="2024-06-25T16:23:19.147587645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 16:23:19.414654 systemd-networkd[1152]: cali040e8adbb19: Gained IPv6LL Jun 25 16:23:19.473742 containerd[1339]: time="2024-06-25T16:23:19.473715647Z" level=info msg="StopPodSandbox for \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\"" Jun 25 16:23:19.545300 containerd[1339]: 2024-06-25 16:23:19.522 [INFO][4164] k8s.go 608: Cleaning up netns ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Jun 25 16:23:19.545300 containerd[1339]: 2024-06-25 16:23:19.522 [INFO][4164] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" iface="eth0" netns="/var/run/netns/cni-326d4687-2926-6d4f-9b2f-0074a7f227bb" Jun 25 16:23:19.545300 containerd[1339]: 2024-06-25 16:23:19.522 [INFO][4164] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" iface="eth0" netns="/var/run/netns/cni-326d4687-2926-6d4f-9b2f-0074a7f227bb" Jun 25 16:23:19.545300 containerd[1339]: 2024-06-25 16:23:19.522 [INFO][4164] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" iface="eth0" netns="/var/run/netns/cni-326d4687-2926-6d4f-9b2f-0074a7f227bb" Jun 25 16:23:19.545300 containerd[1339]: 2024-06-25 16:23:19.522 [INFO][4164] k8s.go 615: Releasing IP address(es) ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Jun 25 16:23:19.545300 containerd[1339]: 2024-06-25 16:23:19.522 [INFO][4164] utils.go 188: Calico CNI releasing IP address ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Jun 25 16:23:19.545300 containerd[1339]: 2024-06-25 16:23:19.538 [INFO][4170] ipam_plugin.go 411: Releasing address using handleID ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" HandleID="k8s-pod-network.a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Workload="localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0" Jun 25 16:23:19.545300 containerd[1339]: 2024-06-25 16:23:19.538 [INFO][4170] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:23:19.545300 containerd[1339]: 2024-06-25 16:23:19.538 [INFO][4170] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:23:19.545300 containerd[1339]: 2024-06-25 16:23:19.542 [WARNING][4170] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" HandleID="k8s-pod-network.a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Workload="localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0" Jun 25 16:23:19.545300 containerd[1339]: 2024-06-25 16:23:19.542 [INFO][4170] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" HandleID="k8s-pod-network.a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Workload="localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0" Jun 25 16:23:19.545300 containerd[1339]: 2024-06-25 16:23:19.543 [INFO][4170] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:23:19.545300 containerd[1339]: 2024-06-25 16:23:19.544 [INFO][4164] k8s.go 621: Teardown processing complete. ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Jun 25 16:23:19.548681 containerd[1339]: time="2024-06-25T16:23:19.546947925Z" level=info msg="TearDown network for sandbox \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\" successfully" Jun 25 16:23:19.548681 containerd[1339]: time="2024-06-25T16:23:19.546969404Z" level=info msg="StopPodSandbox for \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\" returns successfully" Jun 25 16:23:19.548681 containerd[1339]: time="2024-06-25T16:23:19.548571647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6ww22,Uid:90ecef2d-85e5-4ada-bc6e-0ed9c7763599,Namespace:kube-system,Attempt:1,}" Jun 25 16:23:19.546896 systemd[1]: run-netns-cni\x2d326d4687\x2d2926\x2d6d4f\x2d9b2f\x2d0074a7f227bb.mount: Deactivated successfully. Jun 25 16:23:19.638246 systemd-networkd[1152]: cali53e6d4e6ad0: Link UP Jun 25 16:23:19.640044 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:23:19.640252 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali53e6d4e6ad0: link becomes ready Jun 25 16:23:19.640178 systemd-networkd[1152]: cali53e6d4e6ad0: Gained carrier Jun 25 16:23:19.653313 containerd[1339]: 2024-06-25 16:23:19.587 [INFO][4177] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0 coredns-7db6d8ff4d- kube-system 90ecef2d-85e5-4ada-bc6e-0ed9c7763599 726 0 2024-06-25 16:22:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-6ww22 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali53e6d4e6ad0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6ww22" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6ww22-" Jun 25 16:23:19.653313 containerd[1339]: 2024-06-25 16:23:19.587 [INFO][4177] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6ww22" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0" Jun 25 16:23:19.653313 containerd[1339]: 2024-06-25 16:23:19.613 [INFO][4189] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419" 
HandleID="k8s-pod-network.63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419" Workload="localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0" Jun 25 16:23:19.653313 containerd[1339]: 2024-06-25 16:23:19.619 [INFO][4189] ipam_plugin.go 264: Auto assigning IP ContainerID="63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419" HandleID="k8s-pod-network.63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419" Workload="localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002912a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-6ww22", "timestamp":"2024-06-25 16:23:19.6133277 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:23:19.653313 containerd[1339]: 2024-06-25 16:23:19.619 [INFO][4189] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:23:19.653313 containerd[1339]: 2024-06-25 16:23:19.619 [INFO][4189] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:23:19.653313 containerd[1339]: 2024-06-25 16:23:19.619 [INFO][4189] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:23:19.653313 containerd[1339]: 2024-06-25 16:23:19.620 [INFO][4189] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419" host="localhost" Jun 25 16:23:19.653313 containerd[1339]: 2024-06-25 16:23:19.622 [INFO][4189] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:23:19.653313 containerd[1339]: 2024-06-25 16:23:19.624 [INFO][4189] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:23:19.653313 containerd[1339]: 2024-06-25 16:23:19.625 [INFO][4189] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:23:19.653313 containerd[1339]: 2024-06-25 16:23:19.626 [INFO][4189] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:23:19.653313 containerd[1339]: 2024-06-25 16:23:19.626 [INFO][4189] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419" host="localhost" Jun 25 16:23:19.653313 containerd[1339]: 2024-06-25 16:23:19.627 [INFO][4189] ipam.go 1685: Creating new handle: k8s-pod-network.63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419 Jun 25 16:23:19.653313 containerd[1339]: 2024-06-25 16:23:19.629 [INFO][4189] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419" host="localhost" Jun 25 16:23:19.653313 containerd[1339]: 2024-06-25 16:23:19.635 [INFO][4189] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419" host="localhost" Jun 25 16:23:19.653313 containerd[1339]: 2024-06-25 16:23:19.635 [INFO][4189] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419" host="localhost" Jun 25 16:23:19.653313 containerd[1339]: 2024-06-25 16:23:19.635 [INFO][4189] 
ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:23:19.653313 containerd[1339]: 2024-06-25 16:23:19.635 [INFO][4189] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419" HandleID="k8s-pod-network.63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419" Workload="localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0" Jun 25 16:23:19.653812 containerd[1339]: 2024-06-25 16:23:19.636 [INFO][4177] k8s.go 386: Populated endpoint ContainerID="63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6ww22" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"90ecef2d-85e5-4ada-bc6e-0ed9c7763599", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 22, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-6ww22", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53e6d4e6ad0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:19.653812 containerd[1339]: 2024-06-25 16:23:19.636 [INFO][4177] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6ww22" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0" Jun 25 16:23:19.653812 containerd[1339]: 2024-06-25 16:23:19.636 [INFO][4177] dataplane_linux.go 68: Setting the host side veth name to cali53e6d4e6ad0 ContainerID="63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6ww22" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0" Jun 25 16:23:19.653812 containerd[1339]: 2024-06-25 16:23:19.638 [INFO][4177] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6ww22" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0" Jun 25 16:23:19.653812 containerd[1339]: 2024-06-25 16:23:19.640 
[INFO][4177] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6ww22" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"90ecef2d-85e5-4ada-bc6e-0ed9c7763599", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 22, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419", Pod:"coredns-7db6d8ff4d-6ww22", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53e6d4e6ad0", MAC:"62:e1:e7:0d:76:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:19.653812 containerd[1339]: 2024-06-25 16:23:19.652 [INFO][4177] k8s.go 500: Wrote updated endpoint to datastore ContainerID="63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6ww22" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0" Jun 25 16:23:19.658000 audit[4209]: NETFILTER_CFG table=filter:108 family=2 entries=34 op=nft_register_chain pid=4209 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:23:19.658000 audit[4209]: SYSCALL arch=c000003e syscall=46 success=yes exit=18204 a0=3 a1=7ffc88052020 a2=0 a3=7ffc8805200c items=0 ppid=3497 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:19.658000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:23:19.696544 containerd[1339]: time="2024-06-25T16:23:19.696433840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:23:19.696787 containerd[1339]: time="2024-06-25T16:23:19.696468910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:19.696787 containerd[1339]: time="2024-06-25T16:23:19.696692353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:23:19.696787 containerd[1339]: time="2024-06-25T16:23:19.696702562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:19.718580 systemd[1]: Started cri-containerd-63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419.scope - libcontainer container 63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419. Jun 25 16:23:19.723000 audit: BPF prog-id=165 op=LOAD Jun 25 16:23:19.723000 audit: BPF prog-id=166 op=LOAD Jun 25 16:23:19.723000 audit[4228]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=4218 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:19.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633646433636232316639653766393638643035343561316635343336 Jun 25 16:23:19.723000 audit: BPF prog-id=167 op=LOAD Jun 25 16:23:19.723000 audit[4228]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=4218 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:19.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633646433636232316639653766393638643035343561316635343336 Jun 25 16:23:19.723000 audit: BPF prog-id=167 op=UNLOAD Jun 25 16:23:19.723000 audit: BPF prog-id=166 op=UNLOAD Jun 25 16:23:19.723000 audit: BPF prog-id=168 op=LOAD Jun 25 16:23:19.723000 audit[4228]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=4218 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:19.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633646433636232316639653766393638643035343561316635343336 Jun 25 16:23:19.725438 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:23:19.746003 containerd[1339]: time="2024-06-25T16:23:19.745978893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6ww22,Uid:90ecef2d-85e5-4ada-bc6e-0ed9c7763599,Namespace:kube-system,Attempt:1,} returns sandbox id \"63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419\"" Jun 25 16:23:19.748020 containerd[1339]: time="2024-06-25T16:23:19.747820986Z" level=info msg="CreateContainer within sandbox 
\"63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:23:19.825028 containerd[1339]: time="2024-06-25T16:23:19.824987458Z" level=info msg="CreateContainer within sandbox \"63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"80138593f2879cdf88824455b6dd7a3e0cfbab488d51d14f1b59861d18847186\"" Jun 25 16:23:19.825386 containerd[1339]: time="2024-06-25T16:23:19.825368818Z" level=info msg="StartContainer for \"80138593f2879cdf88824455b6dd7a3e0cfbab488d51d14f1b59861d18847186\"" Jun 25 16:23:19.847602 systemd[1]: Started cri-containerd-80138593f2879cdf88824455b6dd7a3e0cfbab488d51d14f1b59861d18847186.scope - libcontainer container 80138593f2879cdf88824455b6dd7a3e0cfbab488d51d14f1b59861d18847186. Jun 25 16:23:19.853000 audit: BPF prog-id=169 op=LOAD Jun 25 16:23:19.853000 audit: BPF prog-id=170 op=LOAD Jun 25 16:23:19.853000 audit[4259]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4218 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:19.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830313338353933663238373963646638383832343435356236646437 Jun 25 16:23:19.853000 audit: BPF prog-id=171 op=LOAD Jun 25 16:23:19.853000 audit[4259]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4218 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:19.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830313338353933663238373963646638383832343435356236646437 Jun 25 16:23:19.853000 audit: BPF prog-id=171 op=UNLOAD Jun 25 16:23:19.853000 audit: BPF prog-id=170 op=UNLOAD Jun 25 16:23:19.853000 audit: BPF prog-id=172 op=LOAD Jun 25 16:23:19.853000 audit[4259]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4218 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:19.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830313338353933663238373963646638383832343435356236646437 Jun 25 16:23:19.862861 containerd[1339]: time="2024-06-25T16:23:19.862838894Z" level=info msg="StartContainer for \"80138593f2879cdf88824455b6dd7a3e0cfbab488d51d14f1b59861d18847186\" returns successfully" Jun 25 16:23:20.468244 containerd[1339]: time="2024-06-25T16:23:20.468215157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:20.469166 
containerd[1339]: time="2024-06-25T16:23:20.469133727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 16:23:20.469596 containerd[1339]: time="2024-06-25T16:23:20.469577591Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:20.470439 containerd[1339]: time="2024-06-25T16:23:20.470417934Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:20.471342 containerd[1339]: time="2024-06-25T16:23:20.471326483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:20.471975 containerd[1339]: time="2024-06-25T16:23:20.471948913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.324325137s" Jun 25 16:23:20.472047 containerd[1339]: time="2024-06-25T16:23:20.472035285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 16:23:20.473915 containerd[1339]: time="2024-06-25T16:23:20.473887677Z" level=info msg="CreateContainer within sandbox \"b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 16:23:20.479456 containerd[1339]: time="2024-06-25T16:23:20.479438191Z" level=info msg="CreateContainer within sandbox \"b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a6c7f1f576d648f59b4d3ff5e2d56f7e6b3444a55c2ca51376b4b320e1e526b8\"" Jun 25 16:23:20.480434 containerd[1339]: time="2024-06-25T16:23:20.480414547Z" level=info msg="StartContainer for \"a6c7f1f576d648f59b4d3ff5e2d56f7e6b3444a55c2ca51376b4b320e1e526b8\"" Jun 25 16:23:20.500595 systemd[1]: Started cri-containerd-a6c7f1f576d648f59b4d3ff5e2d56f7e6b3444a55c2ca51376b4b320e1e526b8.scope - libcontainer container a6c7f1f576d648f59b4d3ff5e2d56f7e6b3444a55c2ca51376b4b320e1e526b8. 
Jun 25 16:23:20.508679 kernel: kauditd_printk_skb: 109 callbacks suppressed Jun 25 16:23:20.508745 kernel: audit: type=1334 audit(1719332600.506:594): prog-id=173 op=LOAD Jun 25 16:23:20.506000 audit: BPF prog-id=173 op=LOAD Jun 25 16:23:20.506000 audit[4300]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4062 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:20.509614 kernel: audit: type=1300 audit(1719332600.506:594): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4062 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:20.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136633766316635373664363438663539623464336666356532643536 Jun 25 16:23:20.511701 kernel: audit: type=1327 audit(1719332600.506:594): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136633766316635373664363438663539623464336666356532643536 Jun 25 16:23:20.506000 audit: BPF prog-id=174 op=LOAD Jun 25 16:23:20.513535 kernel: audit: type=1334 audit(1719332600.506:595): prog-id=174 op=LOAD Jun 25 16:23:20.506000 audit[4300]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4062 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:20.516109 kernel: audit: type=1300 audit(1719332600.506:595): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4062 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:20.518296 kernel: audit: type=1327 audit(1719332600.506:595): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136633766316635373664363438663539623464336666356532643536 Jun 25 16:23:20.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136633766316635373664363438663539623464336666356532643536 Jun 25 16:23:20.506000 audit: BPF prog-id=174 op=UNLOAD Jun 25 16:23:20.518939 kernel: audit: type=1334 audit(1719332600.506:596): prog-id=174 op=UNLOAD Jun 25 16:23:20.518975 kernel: audit: type=1334 audit(1719332600.507:597): prog-id=173 op=UNLOAD Jun 25 16:23:20.507000 audit: BPF prog-id=173 op=UNLOAD Jun 25 16:23:20.520941 containerd[1339]: time="2024-06-25T16:23:20.520919868Z" level=info msg="StartContainer for \"a6c7f1f576d648f59b4d3ff5e2d56f7e6b3444a55c2ca51376b4b320e1e526b8\" returns successfully" Jun 25 
16:23:20.507000 audit: BPF prog-id=175 op=LOAD Jun 25 16:23:20.507000 audit[4300]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=4062 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:20.525167 kernel: audit: type=1334 audit(1719332600.507:598): prog-id=175 op=LOAD Jun 25 16:23:20.525208 kernel: audit: type=1300 audit(1719332600.507:598): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=4062 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:20.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136633766316635373664363438663539623464336666356532643536 Jun 25 16:23:20.547856 systemd[1]: run-containerd-runc-k8s.io-63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419-runc.hIgozB.mount: Deactivated successfully. Jun 25 16:23:20.611475 kubelet[2453]: I0625 16:23:20.611428 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6ww22" podStartSLOduration=34.61139579 podStartE2EDuration="34.61139579s" podCreationTimestamp="2024-06-25 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:23:20.604727294 +0000 UTC m=+49.245768514" watchObservedRunningTime="2024-06-25 16:23:20.61139579 +0000 UTC m=+49.252437014" Jun 25 16:23:20.627779 kubelet[2453]: I0625 16:23:20.627745 2453 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 16:23:20.630000 audit[4329]: NETFILTER_CFG table=filter:109 family=2 entries=8 op=nft_register_rule pid=4329 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:20.630000 audit[4329]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fffcc9fa300 a2=0 a3=7fffcc9fa2ec items=0 ppid=2589 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:20.630000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:20.634258 kubelet[2453]: I0625 16:23:20.634237 2453 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 16:23:20.631000 audit[4329]: NETFILTER_CFG table=nat:110 family=2 entries=44 op=nft_register_rule pid=4329 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:20.631000 audit[4329]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fffcc9fa300 a2=0 a3=7fffcc9fa2ec items=0 ppid=2589 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:20.631000 
audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:20.641000 audit[4331]: NETFILTER_CFG table=filter:111 family=2 entries=8 op=nft_register_rule pid=4331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:20.641000 audit[4331]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffeb26668a0 a2=0 a3=7ffeb266688c items=0 ppid=2589 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:20.641000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:20.649000 audit[4331]: NETFILTER_CFG table=nat:112 family=2 entries=56 op=nft_register_chain pid=4331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:20.649000 audit[4331]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffeb26668a0 a2=0 a3=7ffeb266688c items=0 ppid=2589 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:20.649000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:21.240445 kubelet[2453]: I0625 16:23:21.240292 2453 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:23:21.257811 systemd[1]: run-containerd-runc-k8s.io-60ccb113fcc0f75a060e1cf027fee66f644555a5902b5316039a35dd27991dbb-runc.o3MwSl.mount: Deactivated successfully. Jun 25 16:23:21.305616 kubelet[2453]: I0625 16:23:21.305579 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8kv5x" podStartSLOduration=26.594470736 podStartE2EDuration="29.30556816s" podCreationTimestamp="2024-06-25 16:22:52 +0000 UTC" firstStartedPulling="2024-06-25 16:23:17.761451959 +0000 UTC m=+46.402493175" lastFinishedPulling="2024-06-25 16:23:20.472549379 +0000 UTC m=+49.113590599" observedRunningTime="2024-06-25 16:23:20.623574876 +0000 UTC m=+49.264616104" watchObservedRunningTime="2024-06-25 16:23:21.30556816 +0000 UTC m=+49.946609381" Jun 25 16:23:21.526635 systemd-networkd[1152]: cali53e6d4e6ad0: Gained IPv6LL Jun 25 16:23:21.547452 systemd[1]: run-containerd-runc-k8s.io-60ccb113fcc0f75a060e1cf027fee66f644555a5902b5316039a35dd27991dbb-runc.TOPSYU.mount: Deactivated successfully. 
Jun 25 16:23:22.555000 audit[4379]: NETFILTER_CFG table=filter:113 family=2 entries=9 op=nft_register_rule pid=4379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:22.555000 audit[4379]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fffb02bf3a0 a2=0 a3=7fffb02bf38c items=0 ppid=2589 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:22.555000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:22.556000 audit[4379]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=4379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:22.556000 audit[4379]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffb02bf3a0 a2=0 a3=7fffb02bf38c items=0 ppid=2589 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:22.556000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:22.626873 kubelet[2453]: I0625 16:23:22.626828 2453 topology_manager.go:215] "Topology Admit Handler" podUID="8fa19dcd-75ae-4d88-9e0a-67cbed5a62e5" podNamespace="calico-apiserver" podName="calico-apiserver-7464bbdf78-6dwvh" Jun 25 16:23:22.640222 kubelet[2453]: I0625 16:23:22.640185 2453 topology_manager.go:215] "Topology Admit Handler" podUID="6f70ead8-4bbf-45d9-859d-5077129262d8" podNamespace="calico-apiserver" podName="calico-apiserver-7464bbdf78-tjtqh" Jun 25 16:23:22.648687 systemd[1]: Created slice kubepods-besteffort-pod8fa19dcd_75ae_4d88_9e0a_67cbed5a62e5.slice - libcontainer container kubepods-besteffort-pod8fa19dcd_75ae_4d88_9e0a_67cbed5a62e5.slice. Jun 25 16:23:22.652704 systemd[1]: Created slice kubepods-besteffort-pod6f70ead8_4bbf_45d9_859d_5077129262d8.slice - libcontainer container kubepods-besteffort-pod6f70ead8_4bbf_45d9_859d_5077129262d8.slice. 
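The two pod_startup_latency_tracker entries above report both an SLO duration and an end-to-end duration. For csi-node-driver-8kv5x the numbers are consistent with the SLO figure simply excluding the image-pull window: 29.30556816s minus (16:23:20.472549379 − 16:23:17.761451959 ≈ 2.71109742s) gives ≈ 26.59447074s, which matches the reported podStartSLOduration up to rounding of the printed E2E value. A small sketch of that arithmetic (not kubelet code), using the timestamps copied from the log entry:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Values copied from the csi-node-driver-8kv5x entry above.
	firstStartedPulling := parse("2024-06-25 16:23:17.761451959 +0000 UTC")
	lastFinishedPulling := parse("2024-06-25 16:23:20.472549379 +0000 UTC")
	e2e := 29305568160 * time.Nanosecond // podStartE2EDuration = 29.30556816s

	pullWindow := lastFinishedPulling.Sub(firstStartedPulling)
	fmt.Println("image-pull window:", pullWindow)  // ~2.71109742s
	fmt.Println("e2e minus pull:   ", e2e-pullWindow) // ~26.59447074s, the reported podStartSLOduration
}
```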
Jun 25 16:23:22.720298 kubelet[2453]: I0625 16:23:22.720268 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8fa19dcd-75ae-4d88-9e0a-67cbed5a62e5-calico-apiserver-certs\") pod \"calico-apiserver-7464bbdf78-6dwvh\" (UID: \"8fa19dcd-75ae-4d88-9e0a-67cbed5a62e5\") " pod="calico-apiserver/calico-apiserver-7464bbdf78-6dwvh" Jun 25 16:23:22.720507 kubelet[2453]: I0625 16:23:22.720494 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45p7z\" (UniqueName: \"kubernetes.io/projected/8fa19dcd-75ae-4d88-9e0a-67cbed5a62e5-kube-api-access-45p7z\") pod \"calico-apiserver-7464bbdf78-6dwvh\" (UID: \"8fa19dcd-75ae-4d88-9e0a-67cbed5a62e5\") " pod="calico-apiserver/calico-apiserver-7464bbdf78-6dwvh" Jun 25 16:23:22.821690 kubelet[2453]: I0625 16:23:22.821607 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6f70ead8-4bbf-45d9-859d-5077129262d8-calico-apiserver-certs\") pod \"calico-apiserver-7464bbdf78-tjtqh\" (UID: \"6f70ead8-4bbf-45d9-859d-5077129262d8\") " pod="calico-apiserver/calico-apiserver-7464bbdf78-tjtqh" Jun 25 16:23:22.821833 kubelet[2453]: I0625 16:23:22.821822 2453 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbpzb\" (UniqueName: \"kubernetes.io/projected/6f70ead8-4bbf-45d9-859d-5077129262d8-kube-api-access-fbpzb\") pod \"calico-apiserver-7464bbdf78-tjtqh\" (UID: \"6f70ead8-4bbf-45d9-859d-5077129262d8\") " pod="calico-apiserver/calico-apiserver-7464bbdf78-tjtqh" Jun 25 16:23:22.823863 kubelet[2453]: E0625 16:23:22.823848 2453 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:23:22.823982 kubelet[2453]: E0625 16:23:22.823972 2453 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fa19dcd-75ae-4d88-9e0a-67cbed5a62e5-calico-apiserver-certs podName:8fa19dcd-75ae-4d88-9e0a-67cbed5a62e5 nodeName:}" failed. No retries permitted until 2024-06-25 16:23:23.323956764 +0000 UTC m=+51.964997984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8fa19dcd-75ae-4d88-9e0a-67cbed5a62e5-calico-apiserver-certs") pod "calico-apiserver-7464bbdf78-6dwvh" (UID: "8fa19dcd-75ae-4d88-9e0a-67cbed5a62e5") : secret "calico-apiserver-certs" not found Jun 25 16:23:22.922411 kubelet[2453]: E0625 16:23:22.922387 2453 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:23:22.922597 kubelet[2453]: E0625 16:23:22.922587 2453 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f70ead8-4bbf-45d9-859d-5077129262d8-calico-apiserver-certs podName:6f70ead8-4bbf-45d9-859d-5077129262d8 nodeName:}" failed. No retries permitted until 2024-06-25 16:23:23.422575283 +0000 UTC m=+52.063616509 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/6f70ead8-4bbf-45d9-859d-5077129262d8-calico-apiserver-certs") pod "calico-apiserver-7464bbdf78-tjtqh" (UID: "6f70ead8-4bbf-45d9-859d-5077129262d8") : secret "calico-apiserver-certs" not found Jun 25 16:23:23.424310 kubelet[2453]: E0625 16:23:23.424278 2453 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:23:23.424425 kubelet[2453]: E0625 16:23:23.424328 2453 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fa19dcd-75ae-4d88-9e0a-67cbed5a62e5-calico-apiserver-certs podName:8fa19dcd-75ae-4d88-9e0a-67cbed5a62e5 nodeName:}" failed. No retries permitted until 2024-06-25 16:23:24.42431737 +0000 UTC m=+53.065358591 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8fa19dcd-75ae-4d88-9e0a-67cbed5a62e5-calico-apiserver-certs") pod "calico-apiserver-7464bbdf78-6dwvh" (UID: "8fa19dcd-75ae-4d88-9e0a-67cbed5a62e5") : secret "calico-apiserver-certs" not found Jun 25 16:23:23.424512 kubelet[2453]: E0625 16:23:23.424278 2453 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:23:23.424589 kubelet[2453]: E0625 16:23:23.424582 2453 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f70ead8-4bbf-45d9-859d-5077129262d8-calico-apiserver-certs podName:6f70ead8-4bbf-45d9-859d-5077129262d8 nodeName:}" failed. No retries permitted until 2024-06-25 16:23:24.424564872 +0000 UTC m=+53.065606090 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/6f70ead8-4bbf-45d9-859d-5077129262d8-calico-apiserver-certs") pod "calico-apiserver-7464bbdf78-tjtqh" (UID: "6f70ead8-4bbf-45d9-859d-5077129262d8") : secret "calico-apiserver-certs" not found Jun 25 16:23:23.567000 audit[4384]: NETFILTER_CFG table=filter:115 family=2 entries=10 op=nft_register_rule pid=4384 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:23.567000 audit[4384]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fffd31f9970 a2=0 a3=7fffd31f995c items=0 ppid=2589 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:23.567000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:23.567000 audit[4384]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=4384 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:23.567000 audit[4384]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffd31f9970 a2=0 a3=7fffd31f995c items=0 ppid=2589 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:23.567000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:24.451257 containerd[1339]: time="2024-06-25T16:23:24.451178865Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7464bbdf78-6dwvh,Uid:8fa19dcd-75ae-4d88-9e0a-67cbed5a62e5,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:23:24.455494 containerd[1339]: time="2024-06-25T16:23:24.455461989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7464bbdf78-tjtqh,Uid:6f70ead8-4bbf-45d9-859d-5077129262d8,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:23:24.551155 systemd-networkd[1152]: calibe6497e244b: Link UP Jun 25 16:23:24.554389 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:23:24.554433 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibe6497e244b: link becomes ready Jun 25 16:23:24.554234 systemd-networkd[1152]: calibe6497e244b: Gained carrier Jun 25 16:23:24.562423 containerd[1339]: 2024-06-25 16:23:24.494 [INFO][4387] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7464bbdf78--tjtqh-eth0 calico-apiserver-7464bbdf78- calico-apiserver 6f70ead8-4bbf-45d9-859d-5077129262d8 802 0 2024-06-25 16:23:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7464bbdf78 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7464bbdf78-tjtqh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibe6497e244b [] []}} ContainerID="58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59" Namespace="calico-apiserver" Pod="calico-apiserver-7464bbdf78-tjtqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7464bbdf78--tjtqh-" Jun 25 16:23:24.562423 containerd[1339]: 2024-06-25 16:23:24.494 [INFO][4387] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59" Namespace="calico-apiserver" Pod="calico-apiserver-7464bbdf78-tjtqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7464bbdf78--tjtqh-eth0" Jun 25 16:23:24.562423 containerd[1339]: 2024-06-25 16:23:24.522 [INFO][4414] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59" HandleID="k8s-pod-network.58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59" Workload="localhost-k8s-calico--apiserver--7464bbdf78--tjtqh-eth0" Jun 25 16:23:24.562423 containerd[1339]: 2024-06-25 16:23:24.526 [INFO][4414] ipam_plugin.go 264: Auto assigning IP ContainerID="58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59" HandleID="k8s-pod-network.58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59" Workload="localhost-k8s-calico--apiserver--7464bbdf78--tjtqh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e5b60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7464bbdf78-tjtqh", "timestamp":"2024-06-25 16:23:24.522035347 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:23:24.562423 containerd[1339]: 2024-06-25 16:23:24.527 [INFO][4414] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:23:24.562423 containerd[1339]: 2024-06-25 16:23:24.527 [INFO][4414] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:23:24.562423 containerd[1339]: 2024-06-25 16:23:24.527 [INFO][4414] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:23:24.562423 containerd[1339]: 2024-06-25 16:23:24.527 [INFO][4414] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59" host="localhost" Jun 25 16:23:24.562423 containerd[1339]: 2024-06-25 16:23:24.529 [INFO][4414] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:23:24.562423 containerd[1339]: 2024-06-25 16:23:24.531 [INFO][4414] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:23:24.562423 containerd[1339]: 2024-06-25 16:23:24.532 [INFO][4414] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:23:24.562423 containerd[1339]: 2024-06-25 16:23:24.535 [INFO][4414] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:23:24.562423 containerd[1339]: 2024-06-25 16:23:24.535 [INFO][4414] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59" host="localhost" Jun 25 16:23:24.562423 containerd[1339]: 2024-06-25 16:23:24.536 [INFO][4414] ipam.go 1685: Creating new handle: k8s-pod-network.58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59 Jun 25 16:23:24.562423 containerd[1339]: 2024-06-25 16:23:24.539 [INFO][4414] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59" host="localhost" Jun 25 16:23:24.562423 containerd[1339]: 2024-06-25 16:23:24.546 [INFO][4414] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59" host="localhost" Jun 25 16:23:24.562423 containerd[1339]: 2024-06-25 16:23:24.546 [INFO][4414] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59" host="localhost" Jun 25 16:23:24.562423 containerd[1339]: 2024-06-25 16:23:24.546 [INFO][4414] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:23:24.562423 containerd[1339]: 2024-06-25 16:23:24.546 [INFO][4414] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59" HandleID="k8s-pod-network.58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59" Workload="localhost-k8s-calico--apiserver--7464bbdf78--tjtqh-eth0" Jun 25 16:23:24.565099 containerd[1339]: 2024-06-25 16:23:24.547 [INFO][4387] k8s.go 386: Populated endpoint ContainerID="58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59" Namespace="calico-apiserver" Pod="calico-apiserver-7464bbdf78-tjtqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7464bbdf78--tjtqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7464bbdf78--tjtqh-eth0", GenerateName:"calico-apiserver-7464bbdf78-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f70ead8-4bbf-45d9-859d-5077129262d8", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7464bbdf78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7464bbdf78-tjtqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe6497e244b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:24.565099 containerd[1339]: 2024-06-25 16:23:24.547 [INFO][4387] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59" Namespace="calico-apiserver" Pod="calico-apiserver-7464bbdf78-tjtqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7464bbdf78--tjtqh-eth0" Jun 25 16:23:24.565099 containerd[1339]: 2024-06-25 16:23:24.547 [INFO][4387] dataplane_linux.go 68: Setting the host side veth name to calibe6497e244b ContainerID="58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59" Namespace="calico-apiserver" Pod="calico-apiserver-7464bbdf78-tjtqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7464bbdf78--tjtqh-eth0" Jun 25 16:23:24.565099 containerd[1339]: 2024-06-25 16:23:24.554 [INFO][4387] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59" Namespace="calico-apiserver" Pod="calico-apiserver-7464bbdf78-tjtqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7464bbdf78--tjtqh-eth0" Jun 25 16:23:24.565099 containerd[1339]: 2024-06-25 16:23:24.555 [INFO][4387] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59" Namespace="calico-apiserver" 
Pod="calico-apiserver-7464bbdf78-tjtqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7464bbdf78--tjtqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7464bbdf78--tjtqh-eth0", GenerateName:"calico-apiserver-7464bbdf78-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f70ead8-4bbf-45d9-859d-5077129262d8", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7464bbdf78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59", Pod:"calico-apiserver-7464bbdf78-tjtqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe6497e244b", MAC:"2e:87:eb:a7:0d:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:24.565099 containerd[1339]: 2024-06-25 16:23:24.560 [INFO][4387] k8s.go 500: Wrote updated endpoint to datastore ContainerID="58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59" Namespace="calico-apiserver" Pod="calico-apiserver-7464bbdf78-tjtqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7464bbdf78--tjtqh-eth0" Jun 25 16:23:24.575000 audit[4438]: NETFILTER_CFG table=filter:117 family=2 entries=51 op=nft_register_chain pid=4438 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:23:24.575000 audit[4438]: SYSCALL arch=c000003e syscall=46 success=yes exit=26260 a0=3 a1=7ffe42ff1670 a2=0 a3=7ffe42ff165c items=0 ppid=3497 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:24.575000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:23:24.583970 systemd-networkd[1152]: cali59b85158c19: Link UP Jun 25 16:23:24.585572 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali59b85158c19: link becomes ready Jun 25 16:23:24.585667 systemd-networkd[1152]: cali59b85158c19: Gained carrier Jun 25 16:23:24.595092 containerd[1339]: 2024-06-25 16:23:24.512 [INFO][4391] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7464bbdf78--6dwvh-eth0 calico-apiserver-7464bbdf78- calico-apiserver 8fa19dcd-75ae-4d88-9e0a-67cbed5a62e5 800 0 2024-06-25 16:23:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7464bbdf78 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7464bbdf78-6dwvh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali59b85158c19 [] []}} ContainerID="aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43" Namespace="calico-apiserver" Pod="calico-apiserver-7464bbdf78-6dwvh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7464bbdf78--6dwvh-" Jun 25 16:23:24.595092 containerd[1339]: 2024-06-25 16:23:24.512 [INFO][4391] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43" Namespace="calico-apiserver" Pod="calico-apiserver-7464bbdf78-6dwvh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7464bbdf78--6dwvh-eth0" Jun 25 16:23:24.595092 containerd[1339]: 2024-06-25 16:23:24.542 [INFO][4421] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43" HandleID="k8s-pod-network.aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43" Workload="localhost-k8s-calico--apiserver--7464bbdf78--6dwvh-eth0" Jun 25 16:23:24.595092 containerd[1339]: 2024-06-25 16:23:24.555 [INFO][4421] ipam_plugin.go 264: Auto assigning IP ContainerID="aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43" HandleID="k8s-pod-network.aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43" Workload="localhost-k8s-calico--apiserver--7464bbdf78--6dwvh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050700), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7464bbdf78-6dwvh", "timestamp":"2024-06-25 16:23:24.542957323 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:23:24.595092 containerd[1339]: 2024-06-25 16:23:24.557 [INFO][4421] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:23:24.595092 containerd[1339]: 2024-06-25 16:23:24.557 [INFO][4421] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:23:24.595092 containerd[1339]: 2024-06-25 16:23:24.557 [INFO][4421] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:23:24.595092 containerd[1339]: 2024-06-25 16:23:24.560 [INFO][4421] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43" host="localhost" Jun 25 16:23:24.595092 containerd[1339]: 2024-06-25 16:23:24.569 [INFO][4421] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:23:24.595092 containerd[1339]: 2024-06-25 16:23:24.572 [INFO][4421] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:23:24.595092 containerd[1339]: 2024-06-25 16:23:24.573 [INFO][4421] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:23:24.595092 containerd[1339]: 2024-06-25 16:23:24.574 [INFO][4421] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:23:24.595092 containerd[1339]: 2024-06-25 16:23:24.574 [INFO][4421] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43" host="localhost" Jun 25 16:23:24.595092 containerd[1339]: 2024-06-25 16:23:24.575 [INFO][4421] ipam.go 1685: Creating new handle: k8s-pod-network.aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43 Jun 25 16:23:24.595092 containerd[1339]: 2024-06-25 16:23:24.577 [INFO][4421] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43" host="localhost" Jun 25 16:23:24.595092 containerd[1339]: 2024-06-25 16:23:24.581 [INFO][4421] ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43" host="localhost" Jun 25 16:23:24.595092 containerd[1339]: 2024-06-25 16:23:24.581 [INFO][4421] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43" host="localhost" Jun 25 16:23:24.595092 containerd[1339]: 2024-06-25 16:23:24.581 [INFO][4421] ipam_plugin.go 373: Released host-wide IPAM lock. 
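The two Calico IPAM walkthroughs above follow the same pattern: acquire the host-wide IPAM lock, look up the host's block affinities, load the affine block 192.168.88.128/26, take the next free address from it (192.168.88.133 for the first endpoint, 192.168.88.134 for the second), then write the handle and the block back. Below is a condensed, self-contained sketch of the block-assignment step only, using hypothetical stand-in types and handle names rather than the real libcalico-go API.

```go
package main

import (
	"fmt"
	"net/netip"
)

// block is a toy model of a Calico IPAM block: a CIDR plus a record of which
// addresses are already claimed and by which allocation handle.
type block struct {
	cidr      netip.Prefix
	allocated map[netip.Addr]string
}

// assign claims the next free address in the block for the given handle,
// mirroring "Attempting to assign 1 addresses from block" in the log.
func (b *block) assign(handle string) (netip.Addr, bool) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.allocated[a]; !taken {
			b.allocated[a] = handle // "Writing block in order to claim IPs"
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	// "Trying affinity for 192.168.88.128/26 host=localhost"
	affine := &block{
		cidr:      netip.MustParsePrefix("192.168.88.128/26"),
		allocated: map[netip.Addr]string{},
	}
	// Earlier endpoints in this log already hold the first few addresses.
	for a, n := netip.MustParseAddr("192.168.88.128"), 0; n < 5; a, n = a.Next(), n+1 {
		affine.allocated[a] = fmt.Sprintf("existing-endpoint-%d", n)
	}

	first, _ := affine.assign("handle-tjtqh")  // hypothetical handle names
	second, _ := affine.assign("handle-6dwvh")
	fmt.Println(first, second) // 192.168.88.133 192.168.88.134, as claimed in the log
}
```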
Jun 25 16:23:24.595092 containerd[1339]: 2024-06-25 16:23:24.581 [INFO][4421] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43" HandleID="k8s-pod-network.aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43" Workload="localhost-k8s-calico--apiserver--7464bbdf78--6dwvh-eth0" Jun 25 16:23:24.595829 containerd[1339]: 2024-06-25 16:23:24.582 [INFO][4391] k8s.go 386: Populated endpoint ContainerID="aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43" Namespace="calico-apiserver" Pod="calico-apiserver-7464bbdf78-6dwvh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7464bbdf78--6dwvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7464bbdf78--6dwvh-eth0", GenerateName:"calico-apiserver-7464bbdf78-", Namespace:"calico-apiserver", SelfLink:"", UID:"8fa19dcd-75ae-4d88-9e0a-67cbed5a62e5", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7464bbdf78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7464bbdf78-6dwvh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59b85158c19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:24.595829 containerd[1339]: 2024-06-25 16:23:24.582 [INFO][4391] k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43" Namespace="calico-apiserver" Pod="calico-apiserver-7464bbdf78-6dwvh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7464bbdf78--6dwvh-eth0" Jun 25 16:23:24.595829 containerd[1339]: 2024-06-25 16:23:24.582 [INFO][4391] dataplane_linux.go 68: Setting the host side veth name to cali59b85158c19 ContainerID="aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43" Namespace="calico-apiserver" Pod="calico-apiserver-7464bbdf78-6dwvh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7464bbdf78--6dwvh-eth0" Jun 25 16:23:24.595829 containerd[1339]: 2024-06-25 16:23:24.585 [INFO][4391] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43" Namespace="calico-apiserver" Pod="calico-apiserver-7464bbdf78-6dwvh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7464bbdf78--6dwvh-eth0" Jun 25 16:23:24.595829 containerd[1339]: 2024-06-25 16:23:24.587 [INFO][4391] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43" Namespace="calico-apiserver" 
Pod="calico-apiserver-7464bbdf78-6dwvh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7464bbdf78--6dwvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7464bbdf78--6dwvh-eth0", GenerateName:"calico-apiserver-7464bbdf78-", Namespace:"calico-apiserver", SelfLink:"", UID:"8fa19dcd-75ae-4d88-9e0a-67cbed5a62e5", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7464bbdf78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43", Pod:"calico-apiserver-7464bbdf78-6dwvh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59b85158c19", MAC:"9e:b8:13:cf:51:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:24.595829 containerd[1339]: 2024-06-25 16:23:24.593 [INFO][4391] k8s.go 500: Wrote updated endpoint to datastore ContainerID="aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43" Namespace="calico-apiserver" Pod="calico-apiserver-7464bbdf78-6dwvh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7464bbdf78--6dwvh-eth0" Jun 25 16:23:24.604000 audit[4466]: NETFILTER_CFG table=filter:118 family=2 entries=51 op=nft_register_chain pid=4466 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:23:24.604000 audit[4466]: SYSCALL arch=c000003e syscall=46 success=yes exit=25948 a0=3 a1=7ffcb4b7b340 a2=0 a3=7ffcb4b7b32c items=0 ppid=3497 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:24.604000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:23:24.608102 containerd[1339]: time="2024-06-25T16:23:24.608016355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:23:24.608197 containerd[1339]: time="2024-06-25T16:23:24.608176571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:24.608264 containerd[1339]: time="2024-06-25T16:23:24.608251467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:23:24.608317 containerd[1339]: time="2024-06-25T16:23:24.608305376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:24.618157 containerd[1339]: time="2024-06-25T16:23:24.618035585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:23:24.618157 containerd[1339]: time="2024-06-25T16:23:24.618092558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:24.618157 containerd[1339]: time="2024-06-25T16:23:24.618102841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:23:24.618157 containerd[1339]: time="2024-06-25T16:23:24.618108595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:24.629642 systemd[1]: Started cri-containerd-58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59.scope - libcontainer container 58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59. Jun 25 16:23:24.631340 systemd[1]: Started cri-containerd-aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43.scope - libcontainer container aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43. Jun 25 16:23:24.639000 audit: BPF prog-id=176 op=LOAD Jun 25 16:23:24.640000 audit: BPF prog-id=177 op=LOAD Jun 25 16:23:24.640000 audit[4496]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4485 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:24.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161353139653431646234656235643164333231643036313361386465 Jun 25 16:23:24.640000 audit: BPF prog-id=178 op=LOAD Jun 25 16:23:24.640000 audit[4496]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4485 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:24.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161353139653431646234656235643164333231643036313361386465 Jun 25 16:23:24.640000 audit: BPF prog-id=178 op=UNLOAD Jun 25 16:23:24.640000 audit: BPF prog-id=177 op=UNLOAD Jun 25 16:23:24.640000 audit: BPF prog-id=179 op=LOAD Jun 25 16:23:24.640000 audit[4496]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=4485 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:24.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161353139653431646234656235643164333231643036313361386465 Jun 25 
16:23:24.641000 audit: BPF prog-id=180 op=LOAD Jun 25 16:23:24.642000 audit: BPF prog-id=181 op=LOAD Jun 25 16:23:24.642000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=4465 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:24.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538663164636233366134616337383236636263366365653133333639 Jun 25 16:23:24.642000 audit: BPF prog-id=182 op=LOAD Jun 25 16:23:24.642000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=4465 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:24.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538663164636233366134616337383236636263366365653133333639 Jun 25 16:23:24.642000 audit: BPF prog-id=182 op=UNLOAD Jun 25 16:23:24.642000 audit: BPF prog-id=181 op=UNLOAD Jun 25 16:23:24.642000 audit: BPF prog-id=183 op=LOAD Jun 25 16:23:24.642000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=4465 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:24.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538663164636233366134616337383236636263366365653133333639 Jun 25 16:23:24.643798 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:23:24.647540 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:23:24.690083 containerd[1339]: time="2024-06-25T16:23:24.690056837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7464bbdf78-6dwvh,Uid:8fa19dcd-75ae-4d88-9e0a-67cbed5a62e5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43\"" Jun 25 16:23:24.693893 containerd[1339]: time="2024-06-25T16:23:24.693871628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:23:24.695074 containerd[1339]: time="2024-06-25T16:23:24.695057603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7464bbdf78-tjtqh,Uid:6f70ead8-4bbf-45d9-859d-5077129262d8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59\"" Jun 25 16:23:26.262622 systemd-networkd[1152]: calibe6497e244b: Gained IPv6LL Jun 25 16:23:26.454558 systemd-networkd[1152]: cali59b85158c19: Gained IPv6LL Jun 25 16:23:26.844446 containerd[1339]: 
time="2024-06-25T16:23:26.844416042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:26.845468 containerd[1339]: time="2024-06-25T16:23:26.845439383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 16:23:26.845870 containerd[1339]: time="2024-06-25T16:23:26.845854685Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:26.846937 containerd[1339]: time="2024-06-25T16:23:26.846915656Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:26.848059 containerd[1339]: time="2024-06-25T16:23:26.848030699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:26.848764 containerd[1339]: time="2024-06-25T16:23:26.848729197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 2.154734619s" Jun 25 16:23:26.849187 containerd[1339]: time="2024-06-25T16:23:26.848765607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:23:26.850786 containerd[1339]: time="2024-06-25T16:23:26.850767945Z" level=info msg="CreateContainer within sandbox \"aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:23:26.852049 containerd[1339]: time="2024-06-25T16:23:26.851056359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:23:26.859967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2994640420.mount: Deactivated successfully. Jun 25 16:23:26.868533 containerd[1339]: time="2024-06-25T16:23:26.868502945Z" level=info msg="CreateContainer within sandbox \"aa519e41db4eb5d1d321d0613a8de9289093327c0895681414e8c1cdac816f43\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9ba3c2c626fa7da5ab74417f474e7f8af3e3173ebc58a832f644af1fef9455ba\"" Jun 25 16:23:26.869164 containerd[1339]: time="2024-06-25T16:23:26.869145178Z" level=info msg="StartContainer for \"9ba3c2c626fa7da5ab74417f474e7f8af3e3173ebc58a832f644af1fef9455ba\"" Jun 25 16:23:26.896582 systemd[1]: Started cri-containerd-9ba3c2c626fa7da5ab74417f474e7f8af3e3173ebc58a832f644af1fef9455ba.scope - libcontainer container 9ba3c2c626fa7da5ab74417f474e7f8af3e3173ebc58a832f644af1fef9455ba. 
Jun 25 16:23:26.902000 audit: BPF prog-id=184 op=LOAD Jun 25 16:23:26.904803 kernel: kauditd_printk_skb: 55 callbacks suppressed Jun 25 16:23:26.904836 kernel: audit: type=1334 audit(1719332606.902:621): prog-id=184 op=LOAD Jun 25 16:23:26.904000 audit: BPF prog-id=185 op=LOAD Jun 25 16:23:26.906268 kernel: audit: type=1334 audit(1719332606.904:622): prog-id=185 op=LOAD Jun 25 16:23:26.906298 kernel: audit: type=1300 audit(1719332606.904:622): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=4485 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:26.904000 audit[4556]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=4485 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:26.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962613363326336323666613764613561623734343137663437346537 Jun 25 16:23:26.910265 kernel: audit: type=1327 audit(1719332606.904:622): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962613363326336323666613764613561623734343137663437346537 Jun 25 16:23:26.910295 kernel: audit: type=1334 audit(1719332606.904:623): prog-id=186 op=LOAD Jun 25 16:23:26.904000 audit: BPF prog-id=186 op=LOAD Jun 25 16:23:26.910800 kernel: audit: type=1300 audit(1719332606.904:623): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=4485 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:26.904000 audit[4556]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=4485 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:26.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962613363326336323666613764613561623734343137663437346537 Jun 25 16:23:26.915121 kernel: audit: type=1327 audit(1719332606.904:623): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962613363326336323666613764613561623734343137663437346537 Jun 25 16:23:26.915152 kernel: audit: type=1334 audit(1719332606.904:624): prog-id=186 op=UNLOAD Jun 25 16:23:26.915687 kernel: audit: type=1334 audit(1719332606.904:625): prog-id=185 op=UNLOAD Jun 25 16:23:26.904000 audit: BPF prog-id=186 op=UNLOAD Jun 25 16:23:26.904000 audit: BPF prog-id=185 op=UNLOAD Jun 25 16:23:26.904000 audit: BPF prog-id=187 op=LOAD Jun 25 
16:23:26.916656 kernel: audit: type=1334 audit(1719332606.904:626): prog-id=187 op=LOAD Jun 25 16:23:26.904000 audit[4556]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=4485 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:26.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962613363326336323666613764613561623734343137663437346537 Jun 25 16:23:26.937441 containerd[1339]: time="2024-06-25T16:23:26.937413577Z" level=info msg="StartContainer for \"9ba3c2c626fa7da5ab74417f474e7f8af3e3173ebc58a832f644af1fef9455ba\" returns successfully" Jun 25 16:23:27.240385 containerd[1339]: time="2024-06-25T16:23:27.240352647Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:27.240845 containerd[1339]: time="2024-06-25T16:23:27.240815645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Jun 25 16:23:27.240948 containerd[1339]: time="2024-06-25T16:23:27.240933642Z" level=info msg="ImageUpdate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:27.241750 containerd[1339]: time="2024-06-25T16:23:27.241737033Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:27.242933 containerd[1339]: time="2024-06-25T16:23:27.242848322Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:27.244572 containerd[1339]: time="2024-06-25T16:23:27.244545428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 392.389863ms" Jun 25 16:23:27.244647 containerd[1339]: time="2024-06-25T16:23:27.244635512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:23:27.246368 containerd[1339]: time="2024-06-25T16:23:27.246352103Z" level=info msg="CreateContainer within sandbox \"58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:23:27.254765 containerd[1339]: time="2024-06-25T16:23:27.254736317Z" level=info msg="CreateContainer within sandbox \"58f1dcb36a4ac7826cbc6cee13369411f2df495e98a8c662e83452227dd7ab59\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"71757a2fa334983863c3aeb62508f7d8d37792743dd46645ca2f365de6243791\"" Jun 25 16:23:27.255137 containerd[1339]: time="2024-06-25T16:23:27.255123987Z" level=info msg="StartContainer for 
\"71757a2fa334983863c3aeb62508f7d8d37792743dd46645ca2f365de6243791\"" Jun 25 16:23:27.271890 systemd[1]: Started cri-containerd-71757a2fa334983863c3aeb62508f7d8d37792743dd46645ca2f365de6243791.scope - libcontainer container 71757a2fa334983863c3aeb62508f7d8d37792743dd46645ca2f365de6243791. Jun 25 16:23:27.291000 audit: BPF prog-id=188 op=LOAD Jun 25 16:23:27.292000 audit: BPF prog-id=189 op=LOAD Jun 25 16:23:27.292000 audit[4594]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4465 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:27.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731373537613266613333343938333836336333616562363235303866 Jun 25 16:23:27.292000 audit: BPF prog-id=190 op=LOAD Jun 25 16:23:27.292000 audit[4594]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4465 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:27.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731373537613266613333343938333836336333616562363235303866 Jun 25 16:23:27.292000 audit: BPF prog-id=190 op=UNLOAD Jun 25 16:23:27.292000 audit: BPF prog-id=189 op=UNLOAD Jun 25 16:23:27.292000 audit: BPF prog-id=191 op=LOAD Jun 25 16:23:27.292000 audit[4594]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4465 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:27.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731373537613266613333343938333836336333616562363235303866 Jun 25 16:23:27.320785 containerd[1339]: time="2024-06-25T16:23:27.320761587Z" level=info msg="StartContainer for \"71757a2fa334983863c3aeb62508f7d8d37792743dd46645ca2f365de6243791\" returns successfully" Jun 25 16:23:27.647664 kubelet[2453]: I0625 16:23:27.647556 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7464bbdf78-tjtqh" podStartSLOduration=3.097816726 podStartE2EDuration="5.647540468s" podCreationTimestamp="2024-06-25 16:23:22 +0000 UTC" firstStartedPulling="2024-06-25 16:23:24.69583248 +0000 UTC m=+53.336873699" lastFinishedPulling="2024-06-25 16:23:27.245556222 +0000 UTC m=+55.886597441" observedRunningTime="2024-06-25 16:23:27.639221727 +0000 UTC m=+56.280262956" watchObservedRunningTime="2024-06-25 16:23:27.647540468 +0000 UTC m=+56.288581691" Jun 25 16:23:27.679000 audit[4625]: NETFILTER_CFG table=filter:119 family=2 entries=10 op=nft_register_rule pid=4625 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:27.679000 audit[4625]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fffcecc1610 a2=0 a3=7fffcecc15fc items=0 ppid=2589 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:27.679000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:27.680000 audit[4625]: NETFILTER_CFG table=nat:120 family=2 entries=20 op=nft_register_rule pid=4625 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:27.680000 audit[4625]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffcecc1610 a2=0 a3=7fffcecc15fc items=0 ppid=2589 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:27.680000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:27.716000 audit[4627]: NETFILTER_CFG table=filter:121 family=2 entries=10 op=nft_register_rule pid=4627 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:27.716000 audit[4627]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc23259060 a2=0 a3=7ffc2325904c items=0 ppid=2589 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:27.716000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:27.720000 audit[4627]: NETFILTER_CFG table=nat:122 family=2 entries=20 op=nft_register_rule pid=4627 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:27.720000 audit[4627]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc23259060 a2=0 a3=7ffc2325904c items=0 ppid=2589 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:27.720000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:27.856649 systemd[1]: run-containerd-runc-k8s.io-9ba3c2c626fa7da5ab74417f474e7f8af3e3173ebc58a832f644af1fef9455ba-runc.VTXFMn.mount: Deactivated successfully. 
Jun 25 16:23:28.205000 audit[2321]: AVC avc: denied { watch } for pid=2321 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=1041984 scontext=system_u:system_r:container_t:s0:c391,c737 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:28.205000 audit[2321]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0028e6210 a2=fc6 a3=0 items=0 ppid=2152 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c391,c737 key=(null) Jun 25 16:23:28.205000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:28.205000 audit[2321]: AVC avc: denied { watch } for pid=2321 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c391,c737 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:28.205000 audit[2321]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0032ea6e0 a2=fc6 a3=0 items=0 ppid=2152 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c391,c737 key=(null) Jun 25 16:23:28.205000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:28.617133 kubelet[2453]: I0625 16:23:28.617071 2453 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:23:28.617461 kubelet[2453]: I0625 16:23:28.617424 2453 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:23:28.889000 audit[2331]: AVC avc: denied { watch } for pid=2331 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=1041980 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:28.889000 audit[2331]: AVC avc: denied { watch } for pid=2331 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=1041984 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:28.889000 audit[2331]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6e a1=c00f82e900 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:23:28.889000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:23:28.889000 audit[2331]: AVC avc: denied { watch } for pid=2331 comm="kube-apiserver" 
path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=1041986 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:28.889000 audit[2331]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6e a1=c00f82e930 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:23:28.889000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:23:28.889000 audit[2331]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c009ee91a0 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:23:28.889000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:23:28.904000 audit[2331]: AVC avc: denied { watch } for pid=2331 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:28.904000 audit[2331]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c0096b4fe0 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:23:28.904000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:23:28.909000 audit[2331]: AVC avc: denied { watch } for pid=2331 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=1041984 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:28.909000 audit[2331]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c00f82eae0 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:23:28.909000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:23:28.909000 audit[2331]: AVC avc: denied { watch } for pid=2331 comm="kube-apiserver" 
path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:28.909000 audit[2331]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c00998c2a0 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:23:28.909000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:23:30.391757 kubelet[2453]: I0625 16:23:30.391712 2453 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:23:30.462770 kubelet[2453]: I0625 16:23:30.462738 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7464bbdf78-6dwvh" podStartSLOduration=6.304948295 podStartE2EDuration="8.462713796s" podCreationTimestamp="2024-06-25 16:23:22 +0000 UTC" firstStartedPulling="2024-06-25 16:23:24.692153163 +0000 UTC m=+53.333194382" lastFinishedPulling="2024-06-25 16:23:26.849918663 +0000 UTC m=+55.490959883" observedRunningTime="2024-06-25 16:23:27.648323715 +0000 UTC m=+56.289364943" watchObservedRunningTime="2024-06-25 16:23:30.462713796 +0000 UTC m=+59.103755024" Jun 25 16:23:30.477000 audit[4644]: NETFILTER_CFG table=filter:123 family=2 entries=9 op=nft_register_rule pid=4644 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:30.477000 audit[4644]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe282750c0 a2=0 a3=7ffe282750ac items=0 ppid=2589 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:30.477000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:30.477000 audit[4644]: NETFILTER_CFG table=nat:124 family=2 entries=27 op=nft_register_chain pid=4644 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:30.477000 audit[4644]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffe282750c0 a2=0 a3=7ffe282750ac items=0 ppid=2589 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:30.477000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:30.734077 kubelet[2453]: I0625 16:23:30.734053 2453 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:23:30.764000 audit[4646]: NETFILTER_CFG table=filter:125 family=2 entries=8 op=nft_register_rule pid=4646 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:30.764000 audit[4646]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd96738a20 a2=0 a3=7ffd96738a0c items=0 ppid=2589 pid=4646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:30.764000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:30.764000 audit[4646]: NETFILTER_CFG table=nat:126 family=2 entries=34 op=nft_register_chain pid=4646 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:30.764000 audit[4646]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffd96738a20 a2=0 a3=7ffd96738a0c items=0 ppid=2589 pid=4646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:30.764000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:31.543577 containerd[1339]: time="2024-06-25T16:23:31.542480147Z" level=info msg="StopPodSandbox for \"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\"" Jun 25 16:23:31.985161 containerd[1339]: 2024-06-25 16:23:31.706 [WARNING][4661] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8kv5x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0ac2a4fe-1895-4a84-986f-bb41e2524a94", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f", Pod:"csi-node-driver-8kv5x", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali040e8adbb19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:31.985161 containerd[1339]: 2024-06-25 16:23:31.706 [INFO][4661] k8s.go 608: Cleaning up netns ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Jun 25 16:23:31.985161 containerd[1339]: 2024-06-25 16:23:31.706 [INFO][4661] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" iface="eth0" netns="" Jun 25 16:23:31.985161 containerd[1339]: 2024-06-25 16:23:31.706 [INFO][4661] k8s.go 615: Releasing IP address(es) ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Jun 25 16:23:31.985161 containerd[1339]: 2024-06-25 16:23:31.706 [INFO][4661] utils.go 188: Calico CNI releasing IP address ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Jun 25 16:23:31.985161 containerd[1339]: 2024-06-25 16:23:31.974 [INFO][4671] ipam_plugin.go 411: Releasing address using handleID ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" HandleID="k8s-pod-network.22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Workload="localhost-k8s-csi--node--driver--8kv5x-eth0" Jun 25 16:23:31.985161 containerd[1339]: 2024-06-25 16:23:31.975 [INFO][4671] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:23:31.985161 containerd[1339]: 2024-06-25 16:23:31.975 [INFO][4671] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:23:31.985161 containerd[1339]: 2024-06-25 16:23:31.980 [WARNING][4671] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" HandleID="k8s-pod-network.22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Workload="localhost-k8s-csi--node--driver--8kv5x-eth0" Jun 25 16:23:31.985161 containerd[1339]: 2024-06-25 16:23:31.980 [INFO][4671] ipam_plugin.go 439: Releasing address using workloadID ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" HandleID="k8s-pod-network.22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Workload="localhost-k8s-csi--node--driver--8kv5x-eth0" Jun 25 16:23:31.985161 containerd[1339]: 2024-06-25 16:23:31.981 [INFO][4671] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:23:31.985161 containerd[1339]: 2024-06-25 16:23:31.983 [INFO][4661] k8s.go 621: Teardown processing complete. ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Jun 25 16:23:31.985787 containerd[1339]: time="2024-06-25T16:23:31.985747358Z" level=info msg="TearDown network for sandbox \"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\" successfully" Jun 25 16:23:31.986024 containerd[1339]: time="2024-06-25T16:23:31.985784029Z" level=info msg="StopPodSandbox for \"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\" returns successfully" Jun 25 16:23:31.999785 containerd[1339]: time="2024-06-25T16:23:31.999756535Z" level=info msg="RemovePodSandbox for \"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\"" Jun 25 16:23:32.020038 containerd[1339]: time="2024-06-25T16:23:32.002028189Z" level=info msg="Forcibly stopping sandbox \"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\"" Jun 25 16:23:32.081761 containerd[1339]: 2024-06-25 16:23:32.047 [WARNING][4689] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8kv5x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0ac2a4fe-1895-4a84-986f-bb41e2524a94", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9aefa1964cba6640c283c2c09c5757eccda8a04f91381e1438914137abc5c3f", Pod:"csi-node-driver-8kv5x", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali040e8adbb19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:32.081761 containerd[1339]: 2024-06-25 16:23:32.047 [INFO][4689] k8s.go 608: Cleaning up netns ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Jun 25 16:23:32.081761 containerd[1339]: 2024-06-25 16:23:32.047 [INFO][4689] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" iface="eth0" netns="" Jun 25 16:23:32.081761 containerd[1339]: 2024-06-25 16:23:32.047 [INFO][4689] k8s.go 615: Releasing IP address(es) ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Jun 25 16:23:32.081761 containerd[1339]: 2024-06-25 16:23:32.047 [INFO][4689] utils.go 188: Calico CNI releasing IP address ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Jun 25 16:23:32.081761 containerd[1339]: 2024-06-25 16:23:32.074 [INFO][4695] ipam_plugin.go 411: Releasing address using handleID ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" HandleID="k8s-pod-network.22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Workload="localhost-k8s-csi--node--driver--8kv5x-eth0" Jun 25 16:23:32.081761 containerd[1339]: 2024-06-25 16:23:32.074 [INFO][4695] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:23:32.081761 containerd[1339]: 2024-06-25 16:23:32.074 [INFO][4695] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:23:32.081761 containerd[1339]: 2024-06-25 16:23:32.078 [WARNING][4695] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" HandleID="k8s-pod-network.22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Workload="localhost-k8s-csi--node--driver--8kv5x-eth0" Jun 25 16:23:32.081761 containerd[1339]: 2024-06-25 16:23:32.078 [INFO][4695] ipam_plugin.go 439: Releasing address using workloadID ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" HandleID="k8s-pod-network.22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Workload="localhost-k8s-csi--node--driver--8kv5x-eth0" Jun 25 16:23:32.081761 containerd[1339]: 2024-06-25 16:23:32.079 [INFO][4695] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:23:32.081761 containerd[1339]: 2024-06-25 16:23:32.080 [INFO][4689] k8s.go 621: Teardown processing complete. ContainerID="22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0" Jun 25 16:23:32.082158 containerd[1339]: time="2024-06-25T16:23:32.082137042Z" level=info msg="TearDown network for sandbox \"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\" successfully" Jun 25 16:23:32.085653 containerd[1339]: time="2024-06-25T16:23:32.085634908Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:23:32.100199 containerd[1339]: time="2024-06-25T16:23:32.100166628Z" level=info msg="RemovePodSandbox \"22a989b1a335d66504a6e8fa6598620abf1ccfb8d76a29dc23448f8817ab81c0\" returns successfully" Jun 25 16:23:32.100648 containerd[1339]: time="2024-06-25T16:23:32.100634100Z" level=info msg="StopPodSandbox for \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\"" Jun 25 16:23:32.149747 containerd[1339]: 2024-06-25 16:23:32.128 [WARNING][4715] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"90ecef2d-85e5-4ada-bc6e-0ed9c7763599", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 22, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419", Pod:"coredns-7db6d8ff4d-6ww22", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53e6d4e6ad0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:32.149747 containerd[1339]: 2024-06-25 16:23:32.129 [INFO][4715] k8s.go 608: Cleaning up netns ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Jun 25 16:23:32.149747 containerd[1339]: 2024-06-25 16:23:32.129 [INFO][4715] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" iface="eth0" netns="" Jun 25 16:23:32.149747 containerd[1339]: 2024-06-25 16:23:32.129 [INFO][4715] k8s.go 615: Releasing IP address(es) ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Jun 25 16:23:32.149747 containerd[1339]: 2024-06-25 16:23:32.129 [INFO][4715] utils.go 188: Calico CNI releasing IP address ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Jun 25 16:23:32.149747 containerd[1339]: 2024-06-25 16:23:32.142 [INFO][4721] ipam_plugin.go 411: Releasing address using handleID ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" HandleID="k8s-pod-network.a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Workload="localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0" Jun 25 16:23:32.149747 containerd[1339]: 2024-06-25 16:23:32.142 [INFO][4721] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:23:32.149747 containerd[1339]: 2024-06-25 16:23:32.142 [INFO][4721] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:23:32.149747 containerd[1339]: 2024-06-25 16:23:32.146 [WARNING][4721] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" HandleID="k8s-pod-network.a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Workload="localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0" Jun 25 16:23:32.149747 containerd[1339]: 2024-06-25 16:23:32.147 [INFO][4721] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" HandleID="k8s-pod-network.a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Workload="localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0" Jun 25 16:23:32.149747 containerd[1339]: 2024-06-25 16:23:32.147 [INFO][4721] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:23:32.149747 containerd[1339]: 2024-06-25 16:23:32.148 [INFO][4715] k8s.go 621: Teardown processing complete. ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Jun 25 16:23:32.150168 containerd[1339]: time="2024-06-25T16:23:32.150148269Z" level=info msg="TearDown network for sandbox \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\" successfully" Jun 25 16:23:32.150219 containerd[1339]: time="2024-06-25T16:23:32.150209665Z" level=info msg="StopPodSandbox for \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\" returns successfully" Jun 25 16:23:32.153722 containerd[1339]: time="2024-06-25T16:23:32.150638193Z" level=info msg="RemovePodSandbox for \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\"" Jun 25 16:23:32.153722 containerd[1339]: time="2024-06-25T16:23:32.150657351Z" level=info msg="Forcibly stopping sandbox \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\"" Jun 25 16:23:32.197701 containerd[1339]: 2024-06-25 16:23:32.171 [WARNING][4739] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"90ecef2d-85e5-4ada-bc6e-0ed9c7763599", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 22, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63dd3cb21f9e7f968d0545a1f5436789640a7d9215277cb4fca4f64d2d06a419", Pod:"coredns-7db6d8ff4d-6ww22", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53e6d4e6ad0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:32.197701 containerd[1339]: 2024-06-25 16:23:32.171 [INFO][4739] k8s.go 608: Cleaning up netns ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Jun 25 16:23:32.197701 containerd[1339]: 2024-06-25 16:23:32.171 [INFO][4739] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" iface="eth0" netns="" Jun 25 16:23:32.197701 containerd[1339]: 2024-06-25 16:23:32.171 [INFO][4739] k8s.go 615: Releasing IP address(es) ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Jun 25 16:23:32.197701 containerd[1339]: 2024-06-25 16:23:32.171 [INFO][4739] utils.go 188: Calico CNI releasing IP address ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Jun 25 16:23:32.197701 containerd[1339]: 2024-06-25 16:23:32.191 [INFO][4745] ipam_plugin.go 411: Releasing address using handleID ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" HandleID="k8s-pod-network.a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Workload="localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0" Jun 25 16:23:32.197701 containerd[1339]: 2024-06-25 16:23:32.191 [INFO][4745] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:23:32.197701 containerd[1339]: 2024-06-25 16:23:32.191 [INFO][4745] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:23:32.197701 containerd[1339]: 2024-06-25 16:23:32.194 [WARNING][4745] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" HandleID="k8s-pod-network.a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Workload="localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0" Jun 25 16:23:32.197701 containerd[1339]: 2024-06-25 16:23:32.194 [INFO][4745] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" HandleID="k8s-pod-network.a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Workload="localhost-k8s-coredns--7db6d8ff4d--6ww22-eth0" Jun 25 16:23:32.197701 containerd[1339]: 2024-06-25 16:23:32.195 [INFO][4745] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:23:32.197701 containerd[1339]: 2024-06-25 16:23:32.196 [INFO][4739] k8s.go 621: Teardown processing complete. ContainerID="a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1" Jun 25 16:23:32.198766 containerd[1339]: time="2024-06-25T16:23:32.197727995Z" level=info msg="TearDown network for sandbox \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\" successfully" Jun 25 16:23:32.198988 containerd[1339]: time="2024-06-25T16:23:32.198964516Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:23:32.199021 containerd[1339]: time="2024-06-25T16:23:32.198998086Z" level=info msg="RemovePodSandbox \"a869f35a2dfa795d8d92e63d0c5dee4e9e784ffefb4b6142eab09d5bcd7909a1\" returns successfully" Jun 25 16:23:32.199337 containerd[1339]: time="2024-06-25T16:23:32.199323907Z" level=info msg="StopPodSandbox for \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\"" Jun 25 16:23:32.247950 containerd[1339]: 2024-06-25 16:23:32.223 [WARNING][4764] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0", GenerateName:"calico-kube-controllers-56785499b5-", Namespace:"calico-system", SelfLink:"", UID:"9455c0fc-e4a1-4704-970c-3296122d6c00", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56785499b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e", Pod:"calico-kube-controllers-56785499b5-88q2j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9fc66418477", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:32.247950 containerd[1339]: 2024-06-25 16:23:32.223 [INFO][4764] k8s.go 608: Cleaning up netns ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Jun 25 16:23:32.247950 containerd[1339]: 2024-06-25 16:23:32.223 [INFO][4764] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" iface="eth0" netns="" Jun 25 16:23:32.247950 containerd[1339]: 2024-06-25 16:23:32.223 [INFO][4764] k8s.go 615: Releasing IP address(es) ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Jun 25 16:23:32.247950 containerd[1339]: 2024-06-25 16:23:32.223 [INFO][4764] utils.go 188: Calico CNI releasing IP address ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Jun 25 16:23:32.247950 containerd[1339]: 2024-06-25 16:23:32.238 [INFO][4771] ipam_plugin.go 411: Releasing address using handleID ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" HandleID="k8s-pod-network.7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Workload="localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0" Jun 25 16:23:32.247950 containerd[1339]: 2024-06-25 16:23:32.238 [INFO][4771] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:23:32.247950 containerd[1339]: 2024-06-25 16:23:32.239 [INFO][4771] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:23:32.247950 containerd[1339]: 2024-06-25 16:23:32.244 [WARNING][4771] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" HandleID="k8s-pod-network.7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Workload="localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0" Jun 25 16:23:32.247950 containerd[1339]: 2024-06-25 16:23:32.244 [INFO][4771] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" HandleID="k8s-pod-network.7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Workload="localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0" Jun 25 16:23:32.247950 containerd[1339]: 2024-06-25 16:23:32.245 [INFO][4771] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:23:32.247950 containerd[1339]: 2024-06-25 16:23:32.246 [INFO][4764] k8s.go 621: Teardown processing complete. ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Jun 25 16:23:32.248663 containerd[1339]: time="2024-06-25T16:23:32.248618653Z" level=info msg="TearDown network for sandbox \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\" successfully" Jun 25 16:23:32.248737 containerd[1339]: time="2024-06-25T16:23:32.248723306Z" level=info msg="StopPodSandbox for \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\" returns successfully" Jun 25 16:23:32.249860 containerd[1339]: time="2024-06-25T16:23:32.249846548Z" level=info msg="RemovePodSandbox for \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\"" Jun 25 16:23:32.249969 containerd[1339]: time="2024-06-25T16:23:32.249933623Z" level=info msg="Forcibly stopping sandbox \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\"" Jun 25 16:23:32.295673 containerd[1339]: 2024-06-25 16:23:32.273 [WARNING][4789] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0", GenerateName:"calico-kube-controllers-56785499b5-", Namespace:"calico-system", SelfLink:"", UID:"9455c0fc-e4a1-4704-970c-3296122d6c00", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56785499b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"903c18201802aa3a2a02628d327fd2f5645965546f2e94f5a0e7e5a12b2c741e", Pod:"calico-kube-controllers-56785499b5-88q2j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9fc66418477", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:32.295673 containerd[1339]: 2024-06-25 16:23:32.274 [INFO][4789] k8s.go 608: Cleaning up netns ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Jun 25 16:23:32.295673 containerd[1339]: 2024-06-25 16:23:32.274 [INFO][4789] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" iface="eth0" netns="" Jun 25 16:23:32.295673 containerd[1339]: 2024-06-25 16:23:32.274 [INFO][4789] k8s.go 615: Releasing IP address(es) ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Jun 25 16:23:32.295673 containerd[1339]: 2024-06-25 16:23:32.274 [INFO][4789] utils.go 188: Calico CNI releasing IP address ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Jun 25 16:23:32.295673 containerd[1339]: 2024-06-25 16:23:32.289 [INFO][4796] ipam_plugin.go 411: Releasing address using handleID ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" HandleID="k8s-pod-network.7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Workload="localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0" Jun 25 16:23:32.295673 containerd[1339]: 2024-06-25 16:23:32.289 [INFO][4796] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:23:32.295673 containerd[1339]: 2024-06-25 16:23:32.289 [INFO][4796] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:23:32.295673 containerd[1339]: 2024-06-25 16:23:32.292 [WARNING][4796] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" HandleID="k8s-pod-network.7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Workload="localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0" Jun 25 16:23:32.295673 containerd[1339]: 2024-06-25 16:23:32.292 [INFO][4796] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" HandleID="k8s-pod-network.7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Workload="localhost-k8s-calico--kube--controllers--56785499b5--88q2j-eth0" Jun 25 16:23:32.295673 containerd[1339]: 2024-06-25 16:23:32.293 [INFO][4796] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:23:32.295673 containerd[1339]: 2024-06-25 16:23:32.294 [INFO][4789] k8s.go 621: Teardown processing complete. ContainerID="7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613" Jun 25 16:23:32.299368 containerd[1339]: time="2024-06-25T16:23:32.295698939Z" level=info msg="TearDown network for sandbox \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\" successfully" Jun 25 16:23:32.304357 containerd[1339]: time="2024-06-25T16:23:32.304338223Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:23:32.304397 containerd[1339]: time="2024-06-25T16:23:32.304376408Z" level=info msg="RemovePodSandbox \"7fc3f00b3f5ff5f17613e91c40076c5610f644cbfc8cee6aa1272966bfeb9613\" returns successfully" Jun 25 16:23:32.304635 containerd[1339]: time="2024-06-25T16:23:32.304621534Z" level=info msg="StopPodSandbox for \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\"" Jun 25 16:23:32.359126 containerd[1339]: 2024-06-25 16:23:32.329 [WARNING][4814] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"47c466fb-1e8e-485b-9ff4-0f34f0fde19b", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 22, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15", Pod:"coredns-7db6d8ff4d-9nn2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0ff6e61f6a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:32.359126 containerd[1339]: 2024-06-25 16:23:32.329 [INFO][4814] k8s.go 608: Cleaning up netns ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Jun 25 16:23:32.359126 containerd[1339]: 2024-06-25 16:23:32.329 [INFO][4814] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" iface="eth0" netns="" Jun 25 16:23:32.359126 containerd[1339]: 2024-06-25 16:23:32.329 [INFO][4814] k8s.go 615: Releasing IP address(es) ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Jun 25 16:23:32.359126 containerd[1339]: 2024-06-25 16:23:32.329 [INFO][4814] utils.go 188: Calico CNI releasing IP address ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Jun 25 16:23:32.359126 containerd[1339]: 2024-06-25 16:23:32.352 [INFO][4821] ipam_plugin.go 411: Releasing address using handleID ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" HandleID="k8s-pod-network.fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Workload="localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0" Jun 25 16:23:32.359126 containerd[1339]: 2024-06-25 16:23:32.352 [INFO][4821] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:23:32.359126 containerd[1339]: 2024-06-25 16:23:32.353 [INFO][4821] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:23:32.359126 containerd[1339]: 2024-06-25 16:23:32.356 [WARNING][4821] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" HandleID="k8s-pod-network.fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Workload="localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0" Jun 25 16:23:32.359126 containerd[1339]: 2024-06-25 16:23:32.356 [INFO][4821] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" HandleID="k8s-pod-network.fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Workload="localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0" Jun 25 16:23:32.359126 containerd[1339]: 2024-06-25 16:23:32.356 [INFO][4821] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:23:32.359126 containerd[1339]: 2024-06-25 16:23:32.357 [INFO][4814] k8s.go 621: Teardown processing complete. ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Jun 25 16:23:32.359126 containerd[1339]: time="2024-06-25T16:23:32.358884708Z" level=info msg="TearDown network for sandbox \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\" successfully" Jun 25 16:23:32.359126 containerd[1339]: time="2024-06-25T16:23:32.358903973Z" level=info msg="StopPodSandbox for \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\" returns successfully" Jun 25 16:23:32.359762 containerd[1339]: time="2024-06-25T16:23:32.359747729Z" level=info msg="RemovePodSandbox for \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\"" Jun 25 16:23:32.359833 containerd[1339]: time="2024-06-25T16:23:32.359809337Z" level=info msg="Forcibly stopping sandbox \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\"" Jun 25 16:23:32.414274 containerd[1339]: 2024-06-25 16:23:32.380 [WARNING][4839] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"47c466fb-1e8e-485b-9ff4-0f34f0fde19b", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 22, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f5f37a966803058c4664f4c94ac6d04f8ce9b5df961a740a80cc0abc8fecb15", Pod:"coredns-7db6d8ff4d-9nn2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0ff6e61f6a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:32.414274 containerd[1339]: 2024-06-25 16:23:32.380 [INFO][4839] k8s.go 608: Cleaning up netns ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Jun 25 16:23:32.414274 containerd[1339]: 2024-06-25 16:23:32.380 [INFO][4839] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" iface="eth0" netns="" Jun 25 16:23:32.414274 containerd[1339]: 2024-06-25 16:23:32.380 [INFO][4839] k8s.go 615: Releasing IP address(es) ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Jun 25 16:23:32.414274 containerd[1339]: 2024-06-25 16:23:32.380 [INFO][4839] utils.go 188: Calico CNI releasing IP address ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Jun 25 16:23:32.414274 containerd[1339]: 2024-06-25 16:23:32.407 [INFO][4845] ipam_plugin.go 411: Releasing address using handleID ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" HandleID="k8s-pod-network.fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Workload="localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0" Jun 25 16:23:32.414274 containerd[1339]: 2024-06-25 16:23:32.407 [INFO][4845] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:23:32.414274 containerd[1339]: 2024-06-25 16:23:32.407 [INFO][4845] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:23:32.414274 containerd[1339]: 2024-06-25 16:23:32.411 [WARNING][4845] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" HandleID="k8s-pod-network.fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Workload="localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0" Jun 25 16:23:32.414274 containerd[1339]: 2024-06-25 16:23:32.411 [INFO][4845] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" HandleID="k8s-pod-network.fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Workload="localhost-k8s-coredns--7db6d8ff4d--9nn2t-eth0" Jun 25 16:23:32.414274 containerd[1339]: 2024-06-25 16:23:32.412 [INFO][4845] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:23:32.414274 containerd[1339]: 2024-06-25 16:23:32.413 [INFO][4839] k8s.go 621: Teardown processing complete. ContainerID="fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b" Jun 25 16:23:32.414700 containerd[1339]: time="2024-06-25T16:23:32.414679431Z" level=info msg="TearDown network for sandbox \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\" successfully" Jun 25 16:23:32.417189 containerd[1339]: time="2024-06-25T16:23:32.417174071Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:23:32.417265 containerd[1339]: time="2024-06-25T16:23:32.417252654Z" level=info msg="RemovePodSandbox \"fb96226361e6e9ef0644241e8d79b6e9d91c362acb7b80a258c75dee1ef14e9b\" returns successfully" Jun 25 16:23:32.915127 systemd[1]: run-containerd-runc-k8s.io-0ecbbd0f193c8bd397e12348eadfa20af93b2de7f403cc54a6b981b4f01f4b67-runc.HTgcld.mount: Deactivated successfully. 
Jun 25 16:23:33.768000 audit[2321]: AVC avc: denied { watch } for pid=2321 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c391,c737 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:33.770566 kernel: kauditd_printk_skb: 62 callbacks suppressed Jun 25 16:23:33.770605 kernel: audit: type=1400 audit(1719332613.768:650): avc: denied { watch } for pid=2321 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c391,c737 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:33.768000 audit[2321]: AVC avc: denied { watch } for pid=2321 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c391,c737 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:33.773923 kernel: audit: type=1400 audit(1719332613.768:649): avc: denied { watch } for pid=2321 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c391,c737 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:33.773950 kernel: audit: type=1300 audit(1719332613.768:649): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0032198c0 a2=fc6 a3=0 items=0 ppid=2152 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c391,c737 key=(null) Jun 25 16:23:33.768000 audit[2321]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0032198c0 a2=fc6 a3=0 items=0 ppid=2152 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c391,c737 key=(null) Jun 25 16:23:33.768000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:33.776512 kernel: audit: type=1327 audit(1719332613.768:649): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:33.768000 audit[2321]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0032eb140 a2=fc6 a3=0 items=0 ppid=2152 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c391,c737 key=(null) Jun 25 16:23:33.778619 kernel: audit: type=1300 audit(1719332613.768:650): arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0032eb140 a2=fc6 a3=0 items=0 ppid=2152 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c391,c737 key=(null) Jun 25 16:23:33.768000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:33.782668 kernel: audit: type=1327 audit(1719332613.768:650): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:33.782696 kernel: audit: type=1400 audit(1719332613.774:651): avc: denied { watch } for pid=2321 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c391,c737 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:33.774000 audit[2321]: AVC avc: denied { watch } for pid=2321 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c391,c737 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:33.786868 kernel: audit: type=1300 audit(1719332613.774:651): arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0032198e0 a2=fc6 a3=0 items=0 ppid=2152 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c391,c737 key=(null) Jun 25 16:23:33.786889 kernel: audit: type=1327 audit(1719332613.774:651): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:33.774000 audit[2321]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0032198e0 a2=fc6 a3=0 items=0 ppid=2152 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c391,c737 key=(null) Jun 25 16:23:33.774000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:33.802000 audit[2321]: AVC avc: denied { watch } for pid=2321 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c391,c737 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:33.802000 audit[2321]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c003219940 a2=fc6 a3=0 items=0 ppid=2152 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c391,c737 key=(null) Jun 25 16:23:33.802000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:33.806498 kernel: audit: type=1400 audit(1719332613.802:652): avc: denied { watch } for pid=2321 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c391,c737 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:38.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.109:22-139.178.68.195:46740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:38.612467 systemd[1]: Started sshd@7-139.178.70.109:22-139.178.68.195:46740.service - OpenSSH per-connection server daemon (139.178.68.195:46740). Jun 25 16:23:38.677000 audit[4875]: USER_ACCT pid=4875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:38.679326 sshd[4875]: Accepted publickey for core from 139.178.68.195 port 46740 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:23:38.678000 audit[4875]: CRED_ACQ pid=4875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:38.678000 audit[4875]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffccac74fd0 a2=3 a3=7f596e13e480 items=0 ppid=1 pid=4875 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:38.678000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:38.680809 sshd[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:38.683822 systemd-logind[1327]: New session 10 of user core. Jun 25 16:23:38.688596 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 25 16:23:38.689000 audit[4875]: USER_START pid=4875 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:38.690000 audit[4877]: CRED_ACQ pid=4877 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:39.097922 sshd[4875]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:39.097000 audit[4875]: USER_END pid=4875 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:39.100495 kernel: kauditd_printk_skb: 10 callbacks suppressed Jun 25 16:23:39.100530 kernel: audit: type=1106 audit(1719332619.097:659): pid=4875 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:39.097000 audit[4875]: CRED_DISP pid=4875 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:39.104511 kernel: audit: type=1104 audit(1719332619.097:660): pid=4875 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:39.105259 systemd[1]: sshd@7-139.178.70.109:22-139.178.68.195:46740.service: Deactivated successfully. Jun 25 16:23:39.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.109:22-139.178.68.195:46740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:39.105767 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 16:23:39.107300 systemd-logind[1327]: Session 10 logged out. Waiting for processes to exit. Jun 25 16:23:39.107512 kernel: audit: type=1131 audit(1719332619.103:661): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.109:22-139.178.68.195:46740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:39.107824 systemd-logind[1327]: Removed session 10. Jun 25 16:23:44.107987 systemd[1]: Started sshd@8-139.178.70.109:22-139.178.68.195:46754.service - OpenSSH per-connection server daemon (139.178.68.195:46754). Jun 25 16:23:44.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.109:22-139.178.68.195:46754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:44.110495 kernel: audit: type=1130 audit(1719332624.106:662): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.109:22-139.178.68.195:46754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:44.187000 audit[4894]: USER_ACCT pid=4894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:44.189443 sshd[4894]: Accepted publickey for core from 139.178.68.195 port 46754 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:23:44.227636 kernel: audit: type=1101 audit(1719332624.187:663): pid=4894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:44.227678 kernel: audit: type=1103 audit(1719332624.190:664): pid=4894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:44.227701 kernel: audit: type=1006 audit(1719332624.190:665): pid=4894 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jun 25 16:23:44.227723 kernel: audit: type=1300 audit(1719332624.190:665): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff8592ee40 a2=3 a3=7fd0b383e480 items=0 ppid=1 pid=4894 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:44.227744 kernel: audit: type=1327 audit(1719332624.190:665): proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:44.227761 kernel: audit: type=1105 audit(1719332624.205:666): pid=4894 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:44.227780 kernel: audit: type=1103 audit(1719332624.206:667): pid=4896 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:44.190000 audit[4894]: CRED_ACQ pid=4894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:44.190000 audit[4894]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff8592ee40 a2=3 a3=7fd0b383e480 items=0 ppid=1 pid=4894 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:44.190000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:44.205000 audit[4894]: USER_START pid=4894 uid=0 auid=500 ses=11 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:44.206000 audit[4896]: CRED_ACQ pid=4896 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:44.197095 systemd-logind[1327]: New session 11 of user core. Jun 25 16:23:44.192517 sshd[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:44.203588 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 16:23:44.415601 sshd[4894]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:44.414000 audit[4894]: USER_END pid=4894 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:44.417457 systemd-logind[1327]: Session 11 logged out. Waiting for processes to exit. Jun 25 16:23:44.414000 audit[4894]: CRED_DISP pid=4894 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:44.418425 systemd[1]: sshd@8-139.178.70.109:22-139.178.68.195:46754.service: Deactivated successfully. Jun 25 16:23:44.419007 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 16:23:44.420446 kernel: audit: type=1106 audit(1719332624.414:668): pid=4894 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:44.420506 kernel: audit: type=1104 audit(1719332624.414:669): pid=4894 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:44.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.109:22-139.178.68.195:46754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:44.420306 systemd-logind[1327]: Removed session 11. Jun 25 16:23:49.426281 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:23:49.428834 kernel: audit: type=1130 audit(1719332629.423:671): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.109:22-139.178.68.195:34056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:49.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.109:22-139.178.68.195:34056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:49.425070 systemd[1]: Started sshd@9-139.178.70.109:22-139.178.68.195:34056.service - OpenSSH per-connection server daemon (139.178.68.195:34056). Jun 25 16:23:49.462000 audit[4916]: USER_ACCT pid=4916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.464428 sshd[4916]: Accepted publickey for core from 139.178.68.195 port 34056 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:23:49.465714 sshd[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:49.463000 audit[4916]: CRED_ACQ pid=4916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.466693 kernel: audit: type=1101 audit(1719332629.462:672): pid=4916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.466731 kernel: audit: type=1103 audit(1719332629.463:673): pid=4916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.469337 kernel: audit: type=1006 audit(1719332629.463:674): pid=4916 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jun 25 16:23:49.463000 audit[4916]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcd9d52a30 a2=3 a3=7f9de21e7480 items=0 ppid=1 pid=4916 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:49.469854 systemd-logind[1327]: New session 12 of user core. Jun 25 16:23:49.473399 kernel: audit: type=1300 audit(1719332629.463:674): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcd9d52a30 a2=3 a3=7f9de21e7480 items=0 ppid=1 pid=4916 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:49.473425 kernel: audit: type=1327 audit(1719332629.463:674): proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:49.463000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:49.472639 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jun 25 16:23:49.476000 audit[4916]: USER_START pid=4916 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.481600 kernel: audit: type=1105 audit(1719332629.476:675): pid=4916 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.481639 kernel: audit: type=1103 audit(1719332629.479:676): pid=4918 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.479000 audit[4918]: CRED_ACQ pid=4918 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.717632 sshd[4916]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:49.717000 audit[4916]: USER_END pid=4916 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.717000 audit[4916]: CRED_DISP pid=4916 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.723769 kernel: audit: type=1106 audit(1719332629.717:677): pid=4916 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.723811 kernel: audit: type=1104 audit(1719332629.717:678): pid=4916 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.725751 systemd[1]: sshd@9-139.178.70.109:22-139.178.68.195:34056.service: Deactivated successfully. Jun 25 16:23:49.726238 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 16:23:49.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.109:22-139.178.68.195:34056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:49.727159 systemd-logind[1327]: Session 12 logged out. Waiting for processes to exit. Jun 25 16:23:49.727817 systemd[1]: Started sshd@10-139.178.70.109:22-139.178.68.195:34062.service - OpenSSH per-connection server daemon (139.178.68.195:34062). 
Jun 25 16:23:49.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.109:22-139.178.68.195:34062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:49.729230 systemd-logind[1327]: Removed session 12. Jun 25 16:23:49.762000 audit[4935]: USER_ACCT pid=4935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.763765 sshd[4935]: Accepted publickey for core from 139.178.68.195 port 34062 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:23:49.763000 audit[4935]: CRED_ACQ pid=4935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.763000 audit[4935]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd12285c40 a2=3 a3=7f4601961480 items=0 ppid=1 pid=4935 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:49.763000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:49.767432 systemd-logind[1327]: New session 13 of user core. Jun 25 16:23:49.764899 sshd[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:49.770599 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 16:23:49.771000 audit[4935]: USER_START pid=4935 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.772000 audit[4937]: CRED_ACQ pid=4937 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.916509 sshd[4935]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:49.916000 audit[4935]: USER_END pid=4935 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.916000 audit[4935]: CRED_DISP pid=4935 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.920972 systemd[1]: sshd@10-139.178.70.109:22-139.178.68.195:34062.service: Deactivated successfully. Jun 25 16:23:49.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.109:22-139.178.68.195:34062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:49.921388 systemd[1]: session-13.scope: Deactivated successfully. 
Jun 25 16:23:49.922694 systemd-logind[1327]: Session 13 logged out. Waiting for processes to exit. Jun 25 16:23:49.929049 systemd[1]: Started sshd@11-139.178.70.109:22-139.178.68.195:34070.service - OpenSSH per-connection server daemon (139.178.68.195:34070). Jun 25 16:23:49.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.109:22-139.178.68.195:34070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:49.930460 systemd-logind[1327]: Removed session 13. Jun 25 16:23:49.965000 audit[4944]: USER_ACCT pid=4944 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.966802 sshd[4944]: Accepted publickey for core from 139.178.68.195 port 34070 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:23:49.966000 audit[4944]: CRED_ACQ pid=4944 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.966000 audit[4944]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3406c360 a2=3 a3=7f99d3636480 items=0 ppid=1 pid=4944 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:49.966000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:49.967994 sshd[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:49.970942 systemd-logind[1327]: New session 14 of user core. Jun 25 16:23:49.973581 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 16:23:49.975000 audit[4944]: USER_START pid=4944 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:49.976000 audit[4946]: CRED_ACQ pid=4946 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:50.154852 sshd[4944]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:50.154000 audit[4944]: USER_END pid=4944 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:50.154000 audit[4944]: CRED_DISP pid=4944 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:50.156682 systemd-logind[1327]: Session 14 logged out. Waiting for processes to exit. Jun 25 16:23:50.157586 systemd[1]: sshd@11-139.178.70.109:22-139.178.68.195:34070.service: Deactivated successfully. 
Jun 25 16:23:50.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.109:22-139.178.68.195:34070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:50.158112 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 16:23:50.158686 systemd-logind[1327]: Removed session 14. Jun 25 16:23:51.261878 systemd[1]: run-containerd-runc-k8s.io-60ccb113fcc0f75a060e1cf027fee66f644555a5902b5316039a35dd27991dbb-runc.9vfoGh.mount: Deactivated successfully. Jun 25 16:23:55.160878 systemd[1]: Started sshd@12-139.178.70.109:22-139.178.68.195:34086.service - OpenSSH per-connection server daemon (139.178.68.195:34086). Jun 25 16:23:55.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.109:22-139.178.68.195:34086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:55.163910 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 16:23:55.163953 kernel: audit: type=1130 audit(1719332635.160:698): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.109:22-139.178.68.195:34086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:55.438000 audit[4982]: USER_ACCT pid=4982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:55.440562 sshd[4982]: Accepted publickey for core from 139.178.68.195 port 34086 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:23:55.442506 kernel: audit: type=1101 audit(1719332635.438:699): pid=4982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:55.441000 audit[4982]: CRED_ACQ pid=4982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:55.443016 sshd[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:55.446500 kernel: audit: type=1103 audit(1719332635.441:700): pid=4982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:55.446564 kernel: audit: type=1006 audit(1719332635.441:701): pid=4982 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jun 25 16:23:55.446586 kernel: audit: type=1300 audit(1719332635.441:701): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2bd7bd40 a2=3 a3=7f6b56589480 items=0 ppid=1 pid=4982 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.441000 audit[4982]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2bd7bd40 a2=3 
a3=7f6b56589480 items=0 ppid=1 pid=4982 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:55.441000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:55.449890 kernel: audit: type=1327 audit(1719332635.441:701): proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:55.449560 systemd-logind[1327]: New session 15 of user core. Jun 25 16:23:55.453640 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 16:23:55.455000 audit[4982]: USER_START pid=4982 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:55.455000 audit[4984]: CRED_ACQ pid=4984 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:55.461782 kernel: audit: type=1105 audit(1719332635.455:702): pid=4982 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:55.461828 kernel: audit: type=1103 audit(1719332635.455:703): pid=4984 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:55.582651 sshd[4982]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:55.581000 audit[4982]: USER_END pid=4982 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:55.582000 audit[4982]: CRED_DISP pid=4982 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:55.586410 systemd[1]: sshd@12-139.178.70.109:22-139.178.68.195:34086.service: Deactivated successfully. Jun 25 16:23:55.586968 systemd[1]: session-15.scope: Deactivated successfully. 
Jun 25 16:23:55.587819 kernel: audit: type=1106 audit(1719332635.581:704): pid=4982 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:55.587869 kernel: audit: type=1104 audit(1719332635.582:705): pid=4982 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:23:55.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.109:22-139.178.68.195:34086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:55.588078 systemd-logind[1327]: Session 15 logged out. Waiting for processes to exit. Jun 25 16:23:55.588755 systemd-logind[1327]: Removed session 15. Jun 25 16:24:00.594856 systemd[1]: Started sshd@13-139.178.70.109:22-139.178.68.195:57208.service - OpenSSH per-connection server daemon (139.178.68.195:57208). Jun 25 16:24:00.598362 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:24:00.598389 kernel: audit: type=1130 audit(1719332640.593:707): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.109:22-139.178.68.195:57208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:00.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.109:22-139.178.68.195:57208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:00.623000 audit[5001]: USER_ACCT pid=5001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:00.632452 kernel: audit: type=1101 audit(1719332640.623:708): pid=5001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:00.632498 kernel: audit: type=1103 audit(1719332640.625:709): pid=5001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:00.632523 kernel: audit: type=1006 audit(1719332640.627:710): pid=5001 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jun 25 16:24:00.632538 kernel: audit: type=1300 audit(1719332640.627:710): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd8f71130 a2=3 a3=7f34b36de480 items=0 ppid=1 pid=5001 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:00.632553 kernel: audit: type=1327 audit(1719332640.627:710): proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:00.625000 audit[5001]: CRED_ACQ pid=5001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:00.627000 audit[5001]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd8f71130 a2=3 a3=7f34b36de480 items=0 ppid=1 pid=5001 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:00.627000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:00.632663 sshd[5001]: Accepted publickey for core from 139.178.68.195 port 57208 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:24:00.631844 systemd-logind[1327]: New session 16 of user core. Jun 25 16:24:00.628758 sshd[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:00.637612 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jun 25 16:24:00.639000 audit[5001]: USER_START pid=5001 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:00.640000 audit[5003]: CRED_ACQ pid=5003 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:00.645000 kernel: audit: type=1105 audit(1719332640.639:711): pid=5001 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:00.645041 kernel: audit: type=1103 audit(1719332640.640:712): pid=5003 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:00.735469 sshd[5001]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:00.734000 audit[5001]: USER_END pid=5001 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:00.735000 audit[5001]: CRED_DISP pid=5001 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:00.739496 kernel: audit: type=1106 audit(1719332640.734:713): pid=5001 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:00.739531 kernel: audit: type=1104 audit(1719332640.735:714): pid=5001 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:00.738832 systemd[1]: sshd@13-139.178.70.109:22-139.178.68.195:57208.service: Deactivated successfully. Jun 25 16:24:00.739284 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 16:24:00.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.109:22-139.178.68.195:57208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:00.740250 systemd-logind[1327]: Session 16 logged out. Waiting for processes to exit. Jun 25 16:24:00.740825 systemd-logind[1327]: Removed session 16. Jun 25 16:24:02.915438 systemd[1]: run-containerd-runc-k8s.io-0ecbbd0f193c8bd397e12348eadfa20af93b2de7f403cc54a6b981b4f01f4b67-runc.qfHUx0.mount: Deactivated successfully. 
Jun 25 16:24:05.745716 systemd[1]: Started sshd@14-139.178.70.109:22-139.178.68.195:57222.service - OpenSSH per-connection server daemon (139.178.68.195:57222). Jun 25 16:24:05.746856 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:24:05.746901 kernel: audit: type=1130 audit(1719332645.744:716): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.109:22-139.178.68.195:57222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:05.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.109:22-139.178.68.195:57222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:05.784000 audit[5034]: USER_ACCT pid=5034 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:05.786544 sshd[5034]: Accepted publickey for core from 139.178.68.195 port 57222 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:24:05.786000 audit[5034]: CRED_ACQ pid=5034 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:05.789728 kernel: audit: type=1101 audit(1719332645.784:717): pid=5034 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:05.789761 kernel: audit: type=1103 audit(1719332645.786:718): pid=5034 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:05.793215 kernel: audit: type=1006 audit(1719332645.786:719): pid=5034 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jun 25 16:24:05.793240 kernel: audit: type=1300 audit(1719332645.786:719): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff247d7b20 a2=3 a3=7f03e6cf3480 items=0 ppid=1 pid=5034 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:05.793256 kernel: audit: type=1327 audit(1719332645.786:719): proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:05.786000 audit[5034]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff247d7b20 a2=3 a3=7f03e6cf3480 items=0 ppid=1 pid=5034 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:05.786000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:05.789990 sshd[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:05.793639 systemd-logind[1327]: New session 17 of user core. Jun 25 16:24:05.795619 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jun 25 16:24:05.797000 audit[5034]: USER_START pid=5034 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:05.801770 kernel: audit: type=1105 audit(1719332645.797:720): pid=5034 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:05.801798 kernel: audit: type=1103 audit(1719332645.800:721): pid=5036 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:05.800000 audit[5036]: CRED_ACQ pid=5036 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:05.904289 sshd[5034]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:05.904000 audit[5034]: USER_END pid=5034 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:05.905000 audit[5034]: CRED_DISP pid=5034 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:05.908519 systemd[1]: sshd@14-139.178.70.109:22-139.178.68.195:57222.service: Deactivated successfully. Jun 25 16:24:05.909030 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 16:24:05.911149 kernel: audit: type=1106 audit(1719332645.904:722): pid=5034 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:05.911206 kernel: audit: type=1104 audit(1719332645.905:723): pid=5034 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:05.911185 systemd-logind[1327]: Session 17 logged out. Waiting for processes to exit. Jun 25 16:24:05.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.109:22-139.178.68.195:57222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:05.912041 systemd-logind[1327]: Removed session 17. Jun 25 16:24:10.911095 systemd[1]: Started sshd@15-139.178.70.109:22-139.178.68.195:43872.service - OpenSSH per-connection server daemon (139.178.68.195:43872). 
Jun 25 16:24:10.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.109:22-139.178.68.195:43872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:10.912612 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:24:10.912649 kernel: audit: type=1130 audit(1719332650.910:725): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.109:22-139.178.68.195:43872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.220000 audit[5050]: USER_ACCT pid=5050 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:11.221951 sshd[5050]: Accepted publickey for core from 139.178.68.195 port 43872 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:24:11.222000 audit[5050]: CRED_ACQ pid=5050 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:11.224573 kernel: audit: type=1101 audit(1719332651.220:726): pid=5050 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:11.224614 kernel: audit: type=1103 audit(1719332651.222:727): pid=5050 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:11.227421 kernel: audit: type=1006 audit(1719332651.222:728): pid=5050 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jun 25 16:24:11.227452 kernel: audit: type=1300 audit(1719332651.222:728): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd750b910 a2=3 a3=7f65fbc99480 items=0 ppid=1 pid=5050 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:11.222000 audit[5050]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd750b910 a2=3 a3=7f65fbc99480 items=0 ppid=1 pid=5050 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:11.222000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:11.229541 kernel: audit: type=1327 audit(1719332651.222:728): proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:11.235420 sshd[5050]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:11.240710 systemd-logind[1327]: New session 18 of user core. Jun 25 16:24:11.244581 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 25 16:24:11.246000 audit[5050]: USER_START pid=5050 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:11.247000 audit[5052]: CRED_ACQ pid=5052 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:11.251943 kernel: audit: type=1105 audit(1719332651.246:729): pid=5050 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:11.251997 kernel: audit: type=1103 audit(1719332651.247:730): pid=5052 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:11.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.109:22-139.178.68.195:43888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.562372 systemd[1]: Started sshd@16-139.178.70.109:22-139.178.68.195:43888.service - OpenSSH per-connection server daemon (139.178.68.195:43888). Jun 25 16:24:11.564511 kernel: audit: type=1130 audit(1719332651.561:731): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.109:22-139.178.68.195:43888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.565956 sshd[5050]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:11.569000 audit[5050]: USER_END pid=5050 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:11.574638 kernel: audit: type=1106 audit(1719332651.569:732): pid=5050 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:11.574782 systemd[1]: sshd@15-139.178.70.109:22-139.178.68.195:43872.service: Deactivated successfully. Jun 25 16:24:11.575495 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 16:24:11.572000 audit[5050]: CRED_DISP pid=5050 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:11.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.109:22-139.178.68.195:43872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:11.576097 systemd-logind[1327]: Session 18 logged out. Waiting for processes to exit. Jun 25 16:24:11.576700 systemd-logind[1327]: Removed session 18. Jun 25 16:24:11.607000 audit[5060]: USER_ACCT pid=5060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:11.609402 sshd[5060]: Accepted publickey for core from 139.178.68.195 port 43888 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:24:11.608000 audit[5060]: CRED_ACQ pid=5060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:11.608000 audit[5060]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfa273b20 a2=3 a3=7fa5981e4480 items=0 ppid=1 pid=5060 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:11.608000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:11.610810 sshd[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:11.613279 systemd-logind[1327]: New session 19 of user core. Jun 25 16:24:11.619586 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 16:24:11.621000 audit[5060]: USER_START pid=5060 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:11.622000 audit[5063]: CRED_ACQ pid=5063 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:12.085143 sshd[5060]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:12.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.109:22-139.178.68.195:43892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:12.087061 systemd[1]: Started sshd@17-139.178.70.109:22-139.178.68.195:43892.service - OpenSSH per-connection server daemon (139.178.68.195:43892). 
Jun 25 16:24:12.086000 audit[5060]: USER_END pid=5060 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:12.086000 audit[5060]: CRED_DISP pid=5060 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:12.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.109:22-139.178.68.195:43888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:12.094259 systemd[1]: sshd@16-139.178.70.109:22-139.178.68.195:43888.service: Deactivated successfully. Jun 25 16:24:12.094820 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 16:24:12.095561 systemd-logind[1327]: Session 19 logged out. Waiting for processes to exit. Jun 25 16:24:12.096757 systemd-logind[1327]: Removed session 19. Jun 25 16:24:12.164000 audit[5070]: USER_ACCT pid=5070 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:12.166250 sshd[5070]: Accepted publickey for core from 139.178.68.195 port 43892 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:24:12.165000 audit[5070]: CRED_ACQ pid=5070 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:12.165000 audit[5070]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe85d0fe0 a2=3 a3=7f8bd08f4480 items=0 ppid=1 pid=5070 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:12.165000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:12.167972 sshd[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:12.170922 systemd-logind[1327]: New session 20 of user core. Jun 25 16:24:12.174611 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 16:24:12.176000 audit[5070]: USER_START pid=5070 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:12.177000 audit[5073]: CRED_ACQ pid=5073 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:13.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.109:22-139.178.68.195:43898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:13.648090 sshd[5070]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:13.650000 audit[5070]: USER_END pid=5070 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:13.649082 systemd[1]: Started sshd@18-139.178.70.109:22-139.178.68.195:43898.service - OpenSSH per-connection server daemon (139.178.68.195:43898). Jun 25 16:24:13.650000 audit[5070]: CRED_DISP pid=5070 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:13.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.109:22-139.178.68.195:43892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:13.653261 systemd[1]: sshd@17-139.178.70.109:22-139.178.68.195:43892.service: Deactivated successfully. Jun 25 16:24:13.653748 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 16:24:13.654516 systemd-logind[1327]: Session 20 logged out. Waiting for processes to exit. Jun 25 16:24:13.653000 audit[5089]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=5089 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:13.653000 audit[5089]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffcd9fac080 a2=0 a3=7ffcd9fac06c items=0 ppid=2589 pid=5089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:13.653000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:13.656000 audit[5089]: NETFILTER_CFG table=nat:128 family=2 entries=22 op=nft_register_rule pid=5089 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:13.656000 audit[5089]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffcd9fac080 a2=0 a3=0 items=0 ppid=2589 pid=5089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:13.656000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:13.655958 systemd-logind[1327]: Removed session 20. 
Jun 25 16:24:13.663000 audit[5093]: NETFILTER_CFG table=filter:129 family=2 entries=32 op=nft_register_rule pid=5093 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:13.663000 audit[5093]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffdc2224b10 a2=0 a3=7ffdc2224afc items=0 ppid=2589 pid=5093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:13.663000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:13.664000 audit[5093]: NETFILTER_CFG table=nat:130 family=2 entries=22 op=nft_register_rule pid=5093 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:13.664000 audit[5093]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffdc2224b10 a2=0 a3=0 items=0 ppid=2589 pid=5093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:13.664000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:13.701000 audit[5090]: USER_ACCT pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:13.702885 sshd[5090]: Accepted publickey for core from 139.178.68.195 port 43898 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:24:13.702000 audit[5090]: CRED_ACQ pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:13.702000 audit[5090]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe93a09c30 a2=3 a3=7fe561487480 items=0 ppid=1 pid=5090 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:13.702000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:13.706969 sshd[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:13.710407 systemd-logind[1327]: New session 21 of user core. Jun 25 16:24:13.714586 systemd[1]: Started session-21.scope - Session 21 of User core. 
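The NETFILTER_CFG/SYSCALL pairs in this stretch (tables filter:127 through nat:130) come from iptables-restore runs via /usr/sbin/xtables-nft-multi; their PROCTITLE uses the same hex-plus-NUL encoding, so the full argv can be recovered. A short sketch, assuming the same decoding approach as above:

    def proctitle_argv(hexstr):
        # Split the decoded command line on the NUL separators to recover argv
        return bytes.fromhex(hexstr).decode("utf-8", "replace").split("\x00")

    print(proctitle_argv(
        "69707461626C65732D726573746F7265002D770035002D5700"
        "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    ))
    # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']

The --noflush flag tells iptables-restore not to flush existing chains first, so each of these runs applies an incremental rule update, and --counters restores packet/byte counters along with the rules.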
Jun 25 16:24:13.717000 audit[5090]: USER_START pid=5090 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:13.718000 audit[5095]: CRED_ACQ pid=5095 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:14.115454 sshd[5090]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:14.114000 audit[5090]: USER_END pid=5090 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:14.114000 audit[5090]: CRED_DISP pid=5090 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:14.120129 systemd[1]: sshd@18-139.178.70.109:22-139.178.68.195:43898.service: Deactivated successfully. Jun 25 16:24:14.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.109:22-139.178.68.195:43898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:14.120662 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 16:24:14.121076 systemd-logind[1327]: Session 21 logged out. Waiting for processes to exit. Jun 25 16:24:14.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.109:22-139.178.68.195:43908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:14.122267 systemd[1]: Started sshd@19-139.178.70.109:22-139.178.68.195:43908.service - OpenSSH per-connection server daemon (139.178.68.195:43908). Jun 25 16:24:14.123562 systemd-logind[1327]: Removed session 21. 
Jun 25 16:24:14.151000 audit[5103]: USER_ACCT pid=5103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:14.153088 sshd[5103]: Accepted publickey for core from 139.178.68.195 port 43908 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:24:14.152000 audit[5103]: CRED_ACQ pid=5103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:14.152000 audit[5103]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdce5c53c0 a2=3 a3=7f55ad78f480 items=0 ppid=1 pid=5103 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:14.152000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:14.154026 sshd[5103]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:14.157228 systemd-logind[1327]: New session 22 of user core. Jun 25 16:24:14.161578 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 16:24:14.163000 audit[5103]: USER_START pid=5103 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:14.164000 audit[5105]: CRED_ACQ pid=5105 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:14.269419 sshd[5103]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:14.268000 audit[5103]: USER_END pid=5103 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:14.268000 audit[5103]: CRED_DISP pid=5103 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:14.271400 systemd[1]: sshd@19-139.178.70.109:22-139.178.68.195:43908.service: Deactivated successfully. Jun 25 16:24:14.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.109:22-139.178.68.195:43908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:14.271777 systemd-logind[1327]: Session 22 logged out. Waiting for processes to exit. Jun 25 16:24:14.271888 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 16:24:14.272377 systemd-logind[1327]: Removed session 22. Jun 25 16:24:16.759648 systemd[1]: run-containerd-runc-k8s.io-0ecbbd0f193c8bd397e12348eadfa20af93b2de7f403cc54a6b981b4f01f4b67-runc.xSEOdu.mount: Deactivated successfully. 
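Sessions 17 through 22 above all follow the same audit sequence: USER_ACCT and an initial CRED_ACQ while ses= is still the unset value 4294967295, a type=1006 record assigning the new session id, USER_START when PAM opens the session, then USER_END, CRED_DISP and a SERVICE_STOP for the per-connection sshd unit. A rough Python sketch, assuming the journal is read one record per line, that tallies those PAM events per ses= value:

    import re
    from collections import defaultdict

    PAM_EVENT_RE = re.compile(
        r"audit\[\d+\]: (USER_ACCT|CRED_ACQ|USER_START|USER_END|CRED_DISP)\b.*?\bses=(\d+)"
    )

    def pam_events_by_session(lines):
        # Map audit session id -> ordered list of PAM event types (4294967295 = not yet assigned)
        by_ses = defaultdict(list)
        for line in lines:
            m = PAM_EVENT_RE.search(line)
            if m:
                by_ses[m.group(2)].append(m.group(1))
        return dict(by_ses)

Applied to this journal it would show, for example, session 17 collecting its USER_START, CRED_ACQ, USER_END and CRED_DISP events, while the key-acceptance records stay grouped under the unset session id.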
Jun 25 16:24:18.557000 audit[5135]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=5135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:18.560412 kernel: kauditd_printk_skb: 57 callbacks suppressed Jun 25 16:24:18.560450 kernel: audit: type=1325 audit(1719332658.557:774): table=filter:131 family=2 entries=20 op=nft_register_rule pid=5135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:18.560468 kernel: audit: type=1300 audit(1719332658.557:774): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fffcb311e90 a2=0 a3=7fffcb311e7c items=0 ppid=2589 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:18.562750 kernel: audit: type=1327 audit(1719332658.557:774): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:18.557000 audit[5135]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fffcb311e90 a2=0 a3=7fffcb311e7c items=0 ppid=2589 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:18.557000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:18.558000 audit[5135]: NETFILTER_CFG table=nat:132 family=2 entries=106 op=nft_register_chain pid=5135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:18.558000 audit[5135]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7fffcb311e90 a2=0 a3=7fffcb311e7c items=0 ppid=2589 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:18.568461 kernel: audit: type=1325 audit(1719332658.558:775): table=nat:132 family=2 entries=106 op=nft_register_chain pid=5135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:18.568509 kernel: audit: type=1300 audit(1719332658.558:775): arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7fffcb311e90 a2=0 a3=7fffcb311e7c items=0 ppid=2589 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:18.568530 kernel: audit: type=1327 audit(1719332658.558:775): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:18.558000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:19.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.109:22-139.178.68.195:42258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:19.279427 systemd[1]: Started sshd@20-139.178.70.109:22-139.178.68.195:42258.service - OpenSSH per-connection server daemon (139.178.68.195:42258). 
Jun 25 16:24:19.282507 kernel: audit: type=1130 audit(1719332659.278:776): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.109:22-139.178.68.195:42258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:19.330000 audit[5138]: USER_ACCT pid=5138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:19.332527 sshd[5138]: Accepted publickey for core from 139.178.68.195 port 42258 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:24:19.331000 audit[5138]: CRED_ACQ pid=5138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:19.334903 kernel: audit: type=1101 audit(1719332659.330:777): pid=5138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:19.334942 kernel: audit: type=1103 audit(1719332659.331:778): pid=5138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:19.337284 sshd[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:19.338679 kernel: audit: type=1006 audit(1719332659.331:779): pid=5138 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jun 25 16:24:19.331000 audit[5138]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeec498500 a2=3 a3=7f3c3ed3e480 items=0 ppid=1 pid=5138 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:19.331000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:19.342212 systemd-logind[1327]: New session 23 of user core. Jun 25 16:24:19.346673 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jun 25 16:24:19.349000 audit[5138]: USER_START pid=5138 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:19.350000 audit[5140]: CRED_ACQ pid=5140 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:19.532538 sshd[5138]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:19.531000 audit[5138]: USER_END pid=5138 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:19.531000 audit[5138]: CRED_DISP pid=5138 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:19.534647 systemd-logind[1327]: Session 23 logged out. Waiting for processes to exit. Jun 25 16:24:19.534812 systemd[1]: sshd@20-139.178.70.109:22-139.178.68.195:42258.service: Deactivated successfully. Jun 25 16:24:19.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.109:22-139.178.68.195:42258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:19.535302 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 16:24:19.535745 systemd-logind[1327]: Removed session 23. Jun 25 16:24:21.273159 systemd[1]: run-containerd-runc-k8s.io-60ccb113fcc0f75a060e1cf027fee66f644555a5902b5316039a35dd27991dbb-runc.I0SWoo.mount: Deactivated successfully. Jun 25 16:24:24.544772 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 16:24:24.544854 kernel: audit: type=1130 audit(1719332664.542:785): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.109:22-139.178.68.195:42264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:24.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.109:22-139.178.68.195:42264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:24.543739 systemd[1]: Started sshd@21-139.178.70.109:22-139.178.68.195:42264.service - OpenSSH per-connection server daemon (139.178.68.195:42264). 
Jun 25 16:24:24.583000 audit[5178]: USER_ACCT pid=5178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:24.585730 sshd[5178]: Accepted publickey for core from 139.178.68.195 port 42264 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:24:24.589569 kernel: audit: type=1101 audit(1719332664.583:786): pid=5178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:24.589616 kernel: audit: type=1103 audit(1719332664.586:787): pid=5178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:24.586000 audit[5178]: CRED_ACQ pid=5178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:24.590755 sshd[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:24.592570 kernel: audit: type=1006 audit(1719332664.588:788): pid=5178 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jun 25 16:24:24.588000 audit[5178]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff2580ed60 a2=3 a3=7f2ee1182480 items=0 ppid=1 pid=5178 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:24.595862 kernel: audit: type=1300 audit(1719332664.588:788): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff2580ed60 a2=3 a3=7f2ee1182480 items=0 ppid=1 pid=5178 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:24.596020 kernel: audit: type=1327 audit(1719332664.588:788): proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:24.588000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:24.597647 systemd-logind[1327]: New session 24 of user core. Jun 25 16:24:24.599595 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jun 25 16:24:24.601000 audit[5178]: USER_START pid=5178 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:24.602000 audit[5180]: CRED_ACQ pid=5180 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:24.607794 kernel: audit: type=1105 audit(1719332664.601:789): pid=5178 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:24.607829 kernel: audit: type=1103 audit(1719332664.602:790): pid=5180 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:24.756452 sshd[5178]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:24.755000 audit[5178]: USER_END pid=5178 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:24.759496 kernel: audit: type=1106 audit(1719332664.755:791): pid=5178 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:24.758000 audit[5178]: CRED_DISP pid=5178 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:24.760471 systemd[1]: sshd@21-139.178.70.109:22-139.178.68.195:42264.service: Deactivated successfully. Jun 25 16:24:24.760948 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 16:24:24.761495 kernel: audit: type=1104 audit(1719332664.758:792): pid=5178 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:24.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.109:22-139.178.68.195:42264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:24.761754 systemd-logind[1327]: Session 24 logged out. Waiting for processes to exit. Jun 25 16:24:24.762243 systemd-logind[1327]: Removed session 24. 
Jun 25 16:24:28.208000 audit[2321]: AVC avc: denied { watch } for pid=2321 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=1041984 scontext=system_u:system_r:container_t:s0:c391,c737 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:28.208000 audit[2321]: AVC avc: denied { watch } for pid=2321 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c391,c737 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:28.208000 audit[2321]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c002855a70 a2=fc6 a3=0 items=0 ppid=2152 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c391,c737 key=(null) Jun 25 16:24:28.208000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:24:28.208000 audit[2321]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001bfcba0 a2=fc6 a3=0 items=0 ppid=2152 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c391,c737 key=(null) Jun 25 16:24:28.208000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:24:28.894000 audit[2331]: AVC avc: denied { watch } for pid=2331 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=1041984 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:28.894000 audit[2331]: AVC avc: denied { watch } for pid=2331 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=1041986 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:28.894000 audit[2331]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7a a1=c00fbfbbc0 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:24:28.894000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:24:28.894000 audit[2331]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=79 a1=c010372510 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:24:28.894000 audit: 
PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:24:28.894000 audit[2331]: AVC avc: denied { watch } for pid=2331 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=1041980 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:28.894000 audit[2331]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=79 a1=c010372540 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:24:28.894000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:24:28.906000 audit[2331]: AVC avc: denied { watch } for pid=2331 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:28.906000 audit[2331]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=79 a1=c005d63ae0 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:24:28.906000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:24:28.909000 audit[2331]: AVC avc: denied { watch } for pid=2331 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=1041984 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:28.909000 audit[2331]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=79 a1=c010372720 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:24:28.909000 audit[2331]: AVC avc: denied { watch } for pid=2331 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c266,c375 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:28.909000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:24:28.909000 audit[2331]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7a a1=c004c94b40 a2=fc6 a3=0 items=0 ppid=2154 pid=2331 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c266,c375 key=(null) Jun 25 16:24:28.909000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313039002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:24:29.764111 systemd[1]: Started sshd@22-139.178.70.109:22-139.178.68.195:41704.service - OpenSSH per-connection server daemon (139.178.68.195:41704). Jun 25 16:24:29.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.109:22-139.178.68.195:41704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:29.764858 kernel: kauditd_printk_skb: 25 callbacks suppressed Jun 25 16:24:29.765002 kernel: audit: type=1130 audit(1719332669.762:802): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.109:22-139.178.68.195:41704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:29.928000 audit[5195]: USER_ACCT pid=5195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:29.930450 sshd[5195]: Accepted publickey for core from 139.178.68.195 port 41704 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:24:29.929000 audit[5195]: CRED_ACQ pid=5195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:29.931946 sshd[5195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:29.933741 kernel: audit: type=1101 audit(1719332669.928:803): pid=5195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:29.933780 kernel: audit: type=1103 audit(1719332669.929:804): pid=5195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:29.933797 kernel: audit: type=1006 audit(1719332669.929:805): pid=5195 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jun 25 16:24:29.935765 kernel: audit: type=1300 audit(1719332669.929:805): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5fa179d0 a2=3 a3=7fb85d36f480 items=0 ppid=1 pid=5195 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.929000 audit[5195]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5fa179d0 a2=3 a3=7fb85d36f480 items=0 ppid=1 pid=5195 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.936260 systemd-logind[1327]: New session 25 of user core. Jun 25 16:24:29.942974 kernel: audit: type=1327 audit(1719332669.929:805): proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:29.929000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:29.942634 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 16:24:29.945000 audit[5195]: USER_START pid=5195 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:29.946000 audit[5199]: CRED_ACQ pid=5199 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:29.950673 kernel: audit: type=1105 audit(1719332669.945:806): pid=5195 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:29.950710 kernel: audit: type=1103 audit(1719332669.946:807): pid=5199 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:30.374992 sshd[5195]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:30.375000 audit[5195]: USER_END pid=5195 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:30.377680 systemd[1]: sshd@22-139.178.70.109:22-139.178.68.195:41704.service: Deactivated successfully. Jun 25 16:24:30.378558 kernel: audit: type=1106 audit(1719332670.375:808): pid=5195 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:30.378190 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 16:24:30.375000 audit[5195]: CRED_DISP pid=5195 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:30.380551 systemd-logind[1327]: Session 25 logged out. Waiting for processes to exit. Jun 25 16:24:30.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.109:22-139.178.68.195:41704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:30.381521 kernel: audit: type=1104 audit(1719332670.375:809): pid=5195 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:24:30.381474 systemd-logind[1327]: Removed session 25.
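The AVC records at 16:24:28 show kube-controller-manager and kube-apiserver being denied { watch } on the certificate files under /etc/kubernetes/pki: syscall 254 is inotify_add_watch on x86_64, exit=-13 is EACCES, and permissive=0 means SELinux is enforcing, so a container_t process may not place inotify watches on etc_t files. A small Python 3 sketch (field names taken from the records above) for pulling the relevant fields out of such AVC lines:

    import re

    AVC_RE = re.compile(
        r'avc:\s+denied\s+\{ (?P<perm>[^}]+) \} for\s+pid=(?P<pid>\d+)\s+comm="(?P<comm>[^"]+)"'
        r'\s+path="(?P<path>[^"]+)".*?scontext=(?P<scontext>\S+)\s+tcontext=(?P<tcontext>\S+)'
        r'\s+tclass=(?P<tclass>\S+)'
    )

    def parse_avc(line):
        # Return the denial's permission, process and SELinux contexts, or None if not an AVC line
        m = AVC_RE.search(line)
        return m.groupdict() if m else None

    sample = ('audit[2321]: AVC avc: denied { watch } for pid=2321 comm="kube-controller" '
              'path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 '
              'scontext=system_u:system_r:container_t:s0:c391,c737 '
              'tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0')
    print(parse_avc(sample)["path"])   # /etc/kubernetes/pki/ca.crt

Repeated denials for the same pid and path (for example pid 2331 and front-proxy-ca.crt at 16:24:28.894 and again at 16:24:28.909) show the watch being re-attempted rather than a one-off failure.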