Jun 25 16:31:44.719988 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 13:16:37 -00 2024 Jun 25 16:31:44.720001 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:31:44.720007 kernel: Disabled fast string operations Jun 25 16:31:44.720011 kernel: BIOS-provided physical RAM map: Jun 25 16:31:44.720015 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jun 25 16:31:44.720018 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jun 25 16:31:44.720024 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jun 25 16:31:44.720028 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jun 25 16:31:44.720031 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jun 25 16:31:44.720035 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jun 25 16:31:44.720039 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jun 25 16:31:44.720042 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jun 25 16:31:44.720046 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jun 25 16:31:44.720050 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jun 25 16:31:44.720056 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jun 25 16:31:44.720060 kernel: NX (Execute Disable) protection: active Jun 25 16:31:44.720064 kernel: SMBIOS 2.7 present. Jun 25 16:31:44.720068 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jun 25 16:31:44.720072 kernel: vmware: hypercall mode: 0x00 Jun 25 16:31:44.720077 kernel: Hypervisor detected: VMware Jun 25 16:31:44.720081 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jun 25 16:31:44.720093 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jun 25 16:31:44.720097 kernel: vmware: using clock offset of 2613108861 ns Jun 25 16:31:44.720102 kernel: tsc: Detected 3408.000 MHz processor Jun 25 16:31:44.720106 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 16:31:44.720111 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 16:31:44.720115 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jun 25 16:31:44.720119 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 16:31:44.720123 kernel: total RAM covered: 3072M Jun 25 16:31:44.720128 kernel: Found optimal setting for mtrr clean up Jun 25 16:31:44.720132 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jun 25 16:31:44.720138 kernel: Using GB pages for direct mapping Jun 25 16:31:44.720142 kernel: ACPI: Early table checksum verification disabled Jun 25 16:31:44.720146 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jun 25 16:31:44.720150 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jun 25 16:31:44.720154 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jun 25 16:31:44.720159 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jun 25 16:31:44.720163 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jun 25 16:31:44.720167 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jun 25 16:31:44.720172 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jun 25 16:31:44.720178 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Jun 25 16:31:44.720183 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jun 25 16:31:44.720188 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jun 25 16:31:44.720192 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jun 25 16:31:44.720197 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jun 25 16:31:44.720202 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jun 25 16:31:44.720207 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jun 25 16:31:44.720212 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jun 25 16:31:44.720217 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jun 25 16:31:44.720221 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jun 25 16:31:44.720226 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jun 25 16:31:44.720231 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jun 25 16:31:44.720235 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jun 25 16:31:44.720240 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jun 25 16:31:44.720245 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jun 25 16:31:44.720250 kernel: system APIC only can use physical flat Jun 25 16:31:44.720254 kernel: Setting APIC routing to physical flat. 
Jun 25 16:31:44.720259 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 25 16:31:44.720264 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jun 25 16:31:44.720268 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jun 25 16:31:44.720273 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jun 25 16:31:44.720277 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jun 25 16:31:44.720282 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jun 25 16:31:44.720287 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jun 25 16:31:44.720292 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jun 25 16:31:44.720297 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Jun 25 16:31:44.720301 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Jun 25 16:31:44.720305 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Jun 25 16:31:44.720310 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Jun 25 16:31:44.720330 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Jun 25 16:31:44.720334 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Jun 25 16:31:44.720338 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Jun 25 16:31:44.720343 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Jun 25 16:31:44.720347 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Jun 25 16:31:44.720352 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Jun 25 16:31:44.720357 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Jun 25 16:31:44.720361 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Jun 25 16:31:44.720366 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Jun 25 16:31:44.720370 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Jun 25 16:31:44.720374 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Jun 25 16:31:44.720379 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Jun 25 16:31:44.720383 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Jun 25 16:31:44.720388 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Jun 25 16:31:44.720392 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Jun 25 16:31:44.720397 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Jun 25 16:31:44.720402 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Jun 25 16:31:44.720406 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Jun 25 16:31:44.720411 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Jun 25 16:31:44.720415 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Jun 25 16:31:44.720420 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Jun 25 16:31:44.720424 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Jun 25 16:31:44.720428 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Jun 25 16:31:44.720433 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Jun 25 16:31:44.720437 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Jun 25 16:31:44.720443 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Jun 25 16:31:44.720447 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Jun 25 16:31:44.720452 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Jun 25 16:31:44.720456 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Jun 25 16:31:44.720460 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Jun 25 16:31:44.720465 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Jun 25 16:31:44.720469 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Jun 25 16:31:44.720474 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Jun 25 16:31:44.720478 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Jun 25 16:31:44.720483 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Jun 25 16:31:44.720488 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Jun 25 16:31:44.720492 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Jun 25 16:31:44.720497 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Jun 25 16:31:44.720501 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Jun 25 16:31:44.720506 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Jun 25 16:31:44.720510 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Jun 25 16:31:44.720514 kernel: SRAT: PXM 0 -> APIC 0x6a 
-> Node 0 Jun 25 16:31:44.720522 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Jun 25 16:31:44.720527 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Jun 25 16:31:44.720531 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Jun 25 16:31:44.720535 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Jun 25 16:31:44.720541 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Jun 25 16:31:44.720545 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Jun 25 16:31:44.720550 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Jun 25 16:31:44.720558 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Jun 25 16:31:44.720563 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Jun 25 16:31:44.720567 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Jun 25 16:31:44.720572 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Jun 25 16:31:44.720577 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Jun 25 16:31:44.720582 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Jun 25 16:31:44.720587 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Jun 25 16:31:44.720592 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Jun 25 16:31:44.720597 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Jun 25 16:31:44.720601 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Jun 25 16:31:44.720606 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Jun 25 16:31:44.720611 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Jun 25 16:31:44.720616 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Jun 25 16:31:44.720620 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Jun 25 16:31:44.720625 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Jun 25 16:31:44.720631 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Jun 25 16:31:44.720635 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Jun 25 16:31:44.720640 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Jun 25 16:31:44.720645 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Jun 25 16:31:44.720649 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Jun 25 16:31:44.720654 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Jun 25 16:31:44.720659 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Jun 25 16:31:44.720664 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Jun 25 16:31:44.720668 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Jun 25 16:31:44.720673 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Jun 25 16:31:44.720678 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Jun 25 16:31:44.720683 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Jun 25 16:31:44.720688 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Jun 25 16:31:44.720693 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Jun 25 16:31:44.720697 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Jun 25 16:31:44.720702 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Jun 25 16:31:44.720707 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Jun 25 16:31:44.720711 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Jun 25 16:31:44.720716 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Jun 25 16:31:44.720720 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Jun 25 16:31:44.720726 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Jun 25 16:31:44.720731 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Jun 25 16:31:44.720736 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Jun 25 16:31:44.720740 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Jun 25 16:31:44.720745 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Jun 25 16:31:44.720750 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Jun 25 16:31:44.720754 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Jun 25 16:31:44.720759 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Jun 25 16:31:44.720764 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Jun 25 16:31:44.720769 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Jun 25 16:31:44.720773 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Jun 25 16:31:44.720779 kernel: SRAT: PXM 0 -> 
APIC 0xd6 -> Node 0 Jun 25 16:31:44.720783 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Jun 25 16:31:44.720788 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Jun 25 16:31:44.720793 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Jun 25 16:31:44.720798 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Jun 25 16:31:44.720802 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Jun 25 16:31:44.720807 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Jun 25 16:31:44.720812 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Jun 25 16:31:44.720816 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Jun 25 16:31:44.720821 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Jun 25 16:31:44.720827 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Jun 25 16:31:44.720832 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Jun 25 16:31:44.720837 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Jun 25 16:31:44.720841 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Jun 25 16:31:44.720846 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Jun 25 16:31:44.720851 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Jun 25 16:31:44.720855 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Jun 25 16:31:44.720860 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Jun 25 16:31:44.720865 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Jun 25 16:31:44.720869 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Jun 25 16:31:44.720875 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Jun 25 16:31:44.720880 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jun 25 16:31:44.720885 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jun 25 16:31:44.720890 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jun 25 16:31:44.720898 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Jun 25 16:31:44.720906 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Jun 25 16:31:44.720914 kernel: Zone ranges: Jun 25 16:31:44.720922 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 16:31:44.720928 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jun 25 16:31:44.720934 kernel: Normal empty Jun 25 16:31:44.720939 kernel: Movable zone start for each node Jun 25 16:31:44.720944 kernel: Early memory node ranges Jun 25 16:31:44.720949 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jun 25 16:31:44.720953 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jun 25 16:31:44.720958 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jun 25 16:31:44.720963 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jun 25 16:31:44.720968 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 16:31:44.720973 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jun 25 16:31:44.720977 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jun 25 16:31:44.720983 kernel: ACPI: PM-Timer IO Port: 0x1008 Jun 25 16:31:44.720988 kernel: system APIC only can use physical flat Jun 25 16:31:44.720993 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jun 25 16:31:44.720998 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jun 25 16:31:44.721003 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jun 25 16:31:44.721007 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jun 25 16:31:44.721012 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jun 25 16:31:44.721017 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jun 25 16:31:44.721021 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jun 25 16:31:44.721027 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] 
high edge lint[0x1]) Jun 25 16:31:44.721032 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jun 25 16:31:44.721037 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jun 25 16:31:44.721041 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jun 25 16:31:44.721046 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jun 25 16:31:44.721051 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jun 25 16:31:44.721073 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jun 25 16:31:44.721078 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jun 25 16:31:44.721082 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jun 25 16:31:44.721474 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jun 25 16:31:44.721483 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jun 25 16:31:44.721488 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jun 25 16:31:44.721493 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jun 25 16:31:44.721498 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jun 25 16:31:44.721503 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jun 25 16:31:44.721507 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jun 25 16:31:44.721512 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jun 25 16:31:44.721517 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jun 25 16:31:44.721522 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jun 25 16:31:44.721527 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jun 25 16:31:44.721533 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jun 25 16:31:44.721537 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jun 25 16:31:44.721542 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jun 25 16:31:44.721547 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jun 25 16:31:44.721551 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jun 25 16:31:44.721556 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jun 25 16:31:44.721561 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jun 25 16:31:44.721566 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jun 25 16:31:44.721570 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jun 25 16:31:44.721575 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jun 25 16:31:44.721581 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jun 25 16:31:44.721585 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jun 25 16:31:44.721590 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jun 25 16:31:44.721595 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jun 25 16:31:44.721600 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jun 25 16:31:44.721605 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jun 25 16:31:44.721609 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jun 25 16:31:44.721614 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jun 25 16:31:44.721619 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jun 25 16:31:44.721625 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jun 25 16:31:44.721629 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jun 25 16:31:44.721634 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jun 25 16:31:44.721639 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jun 25 16:31:44.721643 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x32] high edge lint[0x1]) Jun 25 16:31:44.721648 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jun 25 16:31:44.721653 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jun 25 16:31:44.721658 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jun 25 16:31:44.721662 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jun 25 16:31:44.721667 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jun 25 16:31:44.721673 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jun 25 16:31:44.721677 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jun 25 16:31:44.721682 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jun 25 16:31:44.721687 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jun 25 16:31:44.721691 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jun 25 16:31:44.721696 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jun 25 16:31:44.721701 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jun 25 16:31:44.721706 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jun 25 16:31:44.721710 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jun 25 16:31:44.721715 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jun 25 16:31:44.721721 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jun 25 16:31:44.721725 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jun 25 16:31:44.721730 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jun 25 16:31:44.721735 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jun 25 16:31:44.721739 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jun 25 16:31:44.721744 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jun 25 16:31:44.721748 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jun 25 16:31:44.721753 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jun 25 16:31:44.721758 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jun 25 16:31:44.721764 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jun 25 16:31:44.721768 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jun 25 16:31:44.721773 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jun 25 16:31:44.721778 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jun 25 16:31:44.721783 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jun 25 16:31:44.721787 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jun 25 16:31:44.721792 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jun 25 16:31:44.721797 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jun 25 16:31:44.721802 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jun 25 16:31:44.721807 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jun 25 16:31:44.721812 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jun 25 16:31:44.721817 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jun 25 16:31:44.721821 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jun 25 16:31:44.721826 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jun 25 16:31:44.721831 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jun 25 16:31:44.721836 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jun 25 16:31:44.721840 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jun 25 16:31:44.721845 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jun 25 16:31:44.721850 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jun 25 16:31:44.721854 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jun 25 16:31:44.721860 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jun 25 16:31:44.721865 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jun 25 16:31:44.721870 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jun 25 16:31:44.721874 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jun 25 16:31:44.721879 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jun 25 16:31:44.721884 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jun 25 16:31:44.721888 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jun 25 16:31:44.721893 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jun 25 16:31:44.721898 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jun 25 16:31:44.721903 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jun 25 16:31:44.721908 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jun 25 16:31:44.721913 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jun 25 16:31:44.721917 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jun 25 16:31:44.721923 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jun 25 16:31:44.721927 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jun 25 16:31:44.721932 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jun 25 16:31:44.721937 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jun 25 16:31:44.721941 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jun 25 16:31:44.721946 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jun 25 16:31:44.721952 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jun 25 16:31:44.721957 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jun 25 16:31:44.721961 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jun 25 16:31:44.721966 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jun 25 16:31:44.721971 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jun 25 16:31:44.721976 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jun 25 16:31:44.721980 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jun 25 16:31:44.721985 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jun 25 16:31:44.721990 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jun 25 16:31:44.721995 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jun 25 16:31:44.722000 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jun 25 16:31:44.722005 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jun 25 16:31:44.722010 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jun 25 16:31:44.722014 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jun 25 16:31:44.722019 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jun 25 16:31:44.722024 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jun 25 16:31:44.722029 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 16:31:44.722034 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jun 25 16:31:44.722039 kernel: TSC deadline timer available Jun 25 16:31:44.722045 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Jun 25 16:31:44.722050 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jun 25 16:31:44.722055 kernel: Booting paravirtualized kernel on VMware hypervisor Jun 25 16:31:44.722060 
kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 16:31:44.722065 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jun 25 16:31:44.722070 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u262144 Jun 25 16:31:44.722075 kernel: pcpu-alloc: s194792 r8192 d30488 u262144 alloc=1*2097152 Jun 25 16:31:44.722079 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jun 25 16:31:44.722091 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jun 25 16:31:44.722100 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jun 25 16:31:44.722105 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jun 25 16:31:44.722109 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jun 25 16:31:44.722114 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jun 25 16:31:44.722119 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jun 25 16:31:44.722132 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jun 25 16:31:44.722138 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jun 25 16:31:44.722143 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jun 25 16:31:44.722149 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jun 25 16:31:44.722154 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jun 25 16:31:44.722159 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jun 25 16:31:44.722164 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jun 25 16:31:44.722169 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jun 25 16:31:44.722174 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jun 25 16:31:44.722179 kernel: Fallback order for Node 0: 0 Jun 25 16:31:44.722184 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Jun 25 16:31:44.722189 kernel: Policy zone: DMA32 Jun 25 16:31:44.722195 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:31:44.722202 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jun 25 16:31:44.722207 kernel: random: crng init done Jun 25 16:31:44.722212 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jun 25 16:31:44.722217 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jun 25 16:31:44.722222 kernel: printk: log_buf_len min size: 262144 bytes Jun 25 16:31:44.722227 kernel: printk: log_buf_len: 1048576 bytes Jun 25 16:31:44.722232 kernel: printk: early log buf free: 239640(91%) Jun 25 16:31:44.722238 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 16:31:44.722244 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 16:31:44.722249 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 16:31:44.722255 kernel: Memory: 1933736K/2096628K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 162632K reserved, 0K cma-reserved) Jun 25 16:31:44.722260 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jun 25 16:31:44.722266 kernel: ftrace: allocating 36080 entries in 141 pages Jun 25 16:31:44.722273 kernel: ftrace: allocated 141 pages with 4 groups Jun 25 16:31:44.722278 kernel: Dynamic Preempt: voluntary Jun 25 16:31:44.722283 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 16:31:44.722288 kernel: rcu: RCU event tracing is enabled. Jun 25 16:31:44.722293 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jun 25 16:31:44.722299 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 16:31:44.722304 kernel: Rude variant of Tasks RCU enabled. Jun 25 16:31:44.722309 kernel: Tracing variant of Tasks RCU enabled. Jun 25 16:31:44.722314 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 16:31:44.722319 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jun 25 16:31:44.722325 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jun 25 16:31:44.722330 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Jun 25 16:31:44.722336 kernel: Console: colour VGA+ 80x25 Jun 25 16:31:44.722341 kernel: printk: console [tty0] enabled Jun 25 16:31:44.722346 kernel: printk: console [ttyS0] enabled Jun 25 16:31:44.722351 kernel: ACPI: Core revision 20220331 Jun 25 16:31:44.722356 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jun 25 16:31:44.722361 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 16:31:44.722366 kernel: x2apic enabled Jun 25 16:31:44.722372 kernel: Switched APIC routing to physical x2apic. Jun 25 16:31:44.722378 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 25 16:31:44.722383 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jun 25 16:31:44.722388 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Jun 25 16:31:44.722393 kernel: Disabled fast string operations Jun 25 16:31:44.722398 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 25 16:31:44.722403 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jun 25 16:31:44.722409 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 16:31:44.722414 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jun 25 16:31:44.722420 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jun 25 16:31:44.722425 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jun 25 16:31:44.722430 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 16:31:44.722435 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jun 25 16:31:44.722440 kernel: RETBleed: Mitigation: Enhanced IBRS Jun 25 16:31:44.722447 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 25 16:31:44.722452 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 25 16:31:44.722457 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 16:31:44.722462 kernel: SRBDS: Unknown: Dependent on hypervisor status Jun 25 16:31:44.722468 kernel: GDS: Unknown: Dependent on hypervisor status Jun 25 16:31:44.722473 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 16:31:44.722479 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 16:31:44.722484 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 16:31:44.722489 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 16:31:44.722495 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jun 25 16:31:44.722500 kernel: Freeing SMP alternatives memory: 32K Jun 25 16:31:44.722505 kernel: pid_max: default: 131072 minimum: 1024 Jun 25 16:31:44.722510 kernel: LSM: Security Framework initializing Jun 25 16:31:44.722516 kernel: SELinux: Initializing. Jun 25 16:31:44.722550 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 16:31:44.722557 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 16:31:44.722562 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jun 25 16:31:44.722568 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:31:44.722573 kernel: cblist_init_generic: Setting shift to 7 and lim to 1. Jun 25 16:31:44.722578 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:31:44.722584 kernel: cblist_init_generic: Setting shift to 7 and lim to 1. Jun 25 16:31:44.722589 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:31:44.722596 kernel: cblist_init_generic: Setting shift to 7 and lim to 1. Jun 25 16:31:44.722601 kernel: Performance Events: Skylake events, core PMU driver. 
Jun 25 16:31:44.722606 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jun 25 16:31:44.722611 kernel: core: CPUID marked event: 'instructions' unavailable Jun 25 16:31:44.722617 kernel: core: CPUID marked event: 'bus cycles' unavailable Jun 25 16:31:44.722622 kernel: core: CPUID marked event: 'cache references' unavailable Jun 25 16:31:44.722627 kernel: core: CPUID marked event: 'cache misses' unavailable Jun 25 16:31:44.722632 kernel: core: CPUID marked event: 'branch instructions' unavailable Jun 25 16:31:44.722637 kernel: core: CPUID marked event: 'branch misses' unavailable Jun 25 16:31:44.722643 kernel: ... version: 1 Jun 25 16:31:44.722648 kernel: ... bit width: 48 Jun 25 16:31:44.722654 kernel: ... generic registers: 4 Jun 25 16:31:44.722659 kernel: ... value mask: 0000ffffffffffff Jun 25 16:31:44.722664 kernel: ... max period: 000000007fffffff Jun 25 16:31:44.722669 kernel: ... fixed-purpose events: 0 Jun 25 16:31:44.722674 kernel: ... event mask: 000000000000000f Jun 25 16:31:44.722679 kernel: signal: max sigframe size: 1776 Jun 25 16:31:44.722685 kernel: rcu: Hierarchical SRCU implementation. Jun 25 16:31:44.722691 kernel: rcu: Max phase no-delay instances is 400. Jun 25 16:31:44.722696 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 25 16:31:44.722702 kernel: smp: Bringing up secondary CPUs ... Jun 25 16:31:44.722707 kernel: x86: Booting SMP configuration: Jun 25 16:31:44.722712 kernel: .... node #0, CPUs: #1 Jun 25 16:31:44.722717 kernel: Disabled fast string operations Jun 25 16:31:44.722722 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jun 25 16:31:44.722728 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jun 25 16:31:44.722733 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 16:31:44.722738 kernel: smpboot: Max logical packages: 128 Jun 25 16:31:44.722745 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jun 25 16:31:44.722750 kernel: devtmpfs: initialized Jun 25 16:31:44.722755 kernel: x86/mm: Memory block size: 128MB Jun 25 16:31:44.722760 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jun 25 16:31:44.722766 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 16:31:44.722771 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jun 25 16:31:44.722776 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 16:31:44.722781 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 16:31:44.722787 kernel: audit: initializing netlink subsys (disabled) Jun 25 16:31:44.722793 kernel: audit: type=2000 audit(1719333103.062:1): state=initialized audit_enabled=0 res=1 Jun 25 16:31:44.722798 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 16:31:44.722803 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 16:31:44.722808 kernel: cpuidle: using governor menu Jun 25 16:31:44.722814 kernel: Simple Boot Flag at 0x36 set to 0x80 Jun 25 16:31:44.722819 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 16:31:44.722824 kernel: dca service started, version 1.12.1 Jun 25 16:31:44.722830 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jun 25 16:31:44.722836 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820 Jun 25 16:31:44.722842 kernel: PCI: Using configuration type 1 for base access Jun 25 16:31:44.722847 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. Jun 25 16:31:44.722853 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 16:31:44.722858 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 16:31:44.722863 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 16:31:44.722869 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 16:31:44.722874 kernel: ACPI: Added _OSI(Module Device) Jun 25 16:31:44.722879 kernel: ACPI: Added _OSI(Processor Device) Jun 25 16:31:44.722884 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 16:31:44.722890 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 16:31:44.722896 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 16:31:44.722901 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jun 25 16:31:44.722906 kernel: ACPI: Interpreter enabled Jun 25 16:31:44.722911 kernel: ACPI: PM: (supports S0 S1 S5) Jun 25 16:31:44.722916 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 16:31:44.722922 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 16:31:44.722927 kernel: PCI: Using E820 reservations for host bridge windows Jun 25 16:31:44.722932 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jun 25 16:31:44.722938 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jun 25 16:31:44.723011 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 25 16:31:44.723060 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jun 25 16:31:44.723121 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jun 25 16:31:44.723129 kernel: PCI host bridge to bus 0000:00 Jun 25 16:31:44.723179 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 25 16:31:44.723224 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jun 25 16:31:44.723265 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jun 25 16:31:44.723305 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 16:31:44.723344 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jun 25 16:31:44.723384 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jun 25 16:31:44.723437 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jun 25 16:31:44.723488 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jun 25 16:31:44.723542 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jun 25 16:31:44.723594 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jun 25 16:31:44.723640 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jun 25 16:31:44.723686 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jun 25 16:31:44.723731 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jun 25 16:31:44.723777 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jun 25 16:31:44.723824 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jun 25 16:31:44.723876 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jun 25 16:31:44.723923 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jun 25 16:31:44.723969 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jun 25 16:31:44.724016 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jun 25 16:31:44.724063 kernel: pci 0000:00:07.7: reg 0x10: [io 
0x1080-0x10bf] Jun 25 16:31:44.724122 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jun 25 16:31:44.724175 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jun 25 16:31:44.724222 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jun 25 16:31:44.724268 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jun 25 16:31:44.724313 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jun 25 16:31:44.724358 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jun 25 16:31:44.724403 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 25 16:31:44.724454 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jun 25 16:31:44.724507 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.724553 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.724602 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.724650 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.724701 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.724747 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.724798 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.724844 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.724894 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.724940 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.724989 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.725036 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.725096 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.725148 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.725200 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.725246 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.725295 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.725349 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.727650 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.727701 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.727752 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.727799 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.727849 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.727897 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.727944 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.727989 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.728038 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.728082 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.728145 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.728191 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.728244 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.728289 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.728337 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.728383 kernel: 
pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.728432 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.728477 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.728532 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.728577 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.728625 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.728670 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.728717 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.728762 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.728813 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.728858 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.728907 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.728952 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.729001 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.729046 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.729110 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.729158 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.729207 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.729252 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.729301 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.729346 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.729393 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.729441 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.729489 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.729560 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.729624 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.729669 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.729718 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.729765 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.729813 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jun 25 16:31:44.729857 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.729908 kernel: pci_bus 0000:01: extended config space not accessible Jun 25 16:31:44.729955 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jun 25 16:31:44.730001 kernel: pci_bus 0000:02: extended config space not accessible Jun 25 16:31:44.730011 kernel: acpiphp: Slot [32] registered Jun 25 16:31:44.730017 kernel: acpiphp: Slot [33] registered Jun 25 16:31:44.730022 kernel: acpiphp: Slot [34] registered Jun 25 16:31:44.730027 kernel: acpiphp: Slot [35] registered Jun 25 16:31:44.730033 kernel: acpiphp: Slot [36] registered Jun 25 16:31:44.730038 kernel: acpiphp: Slot [37] registered Jun 25 16:31:44.730043 kernel: acpiphp: Slot [38] registered Jun 25 16:31:44.730048 kernel: acpiphp: Slot [39] registered Jun 25 16:31:44.730053 kernel: acpiphp: Slot [40] registered Jun 25 16:31:44.730059 kernel: acpiphp: Slot [41] registered Jun 25 16:31:44.730065 kernel: acpiphp: Slot [42] registered Jun 25 16:31:44.730070 kernel: acpiphp: Slot [43] 
registered Jun 25 16:31:44.730075 kernel: acpiphp: Slot [44] registered Jun 25 16:31:44.730080 kernel: acpiphp: Slot [45] registered Jun 25 16:31:44.730091 kernel: acpiphp: Slot [46] registered Jun 25 16:31:44.730097 kernel: acpiphp: Slot [47] registered Jun 25 16:31:44.730102 kernel: acpiphp: Slot [48] registered Jun 25 16:31:44.730107 kernel: acpiphp: Slot [49] registered Jun 25 16:31:44.730113 kernel: acpiphp: Slot [50] registered Jun 25 16:31:44.730119 kernel: acpiphp: Slot [51] registered Jun 25 16:31:44.730125 kernel: acpiphp: Slot [52] registered Jun 25 16:31:44.730130 kernel: acpiphp: Slot [53] registered Jun 25 16:31:44.730135 kernel: acpiphp: Slot [54] registered Jun 25 16:31:44.730140 kernel: acpiphp: Slot [55] registered Jun 25 16:31:44.730145 kernel: acpiphp: Slot [56] registered Jun 25 16:31:44.730150 kernel: acpiphp: Slot [57] registered Jun 25 16:31:44.730155 kernel: acpiphp: Slot [58] registered Jun 25 16:31:44.730160 kernel: acpiphp: Slot [59] registered Jun 25 16:31:44.730167 kernel: acpiphp: Slot [60] registered Jun 25 16:31:44.730172 kernel: acpiphp: Slot [61] registered Jun 25 16:31:44.730177 kernel: acpiphp: Slot [62] registered Jun 25 16:31:44.730182 kernel: acpiphp: Slot [63] registered Jun 25 16:31:44.730233 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jun 25 16:31:44.730279 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jun 25 16:31:44.730323 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jun 25 16:31:44.730368 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jun 25 16:31:44.730412 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jun 25 16:31:44.730459 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jun 25 16:31:44.730503 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jun 25 16:31:44.730547 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jun 25 16:31:44.730590 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jun 25 16:31:44.730640 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jun 25 16:31:44.730687 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jun 25 16:31:44.730733 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jun 25 16:31:44.730782 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jun 25 16:31:44.730828 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jun 25 16:31:44.730874 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jun 25 16:31:44.730918 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jun 25 16:31:44.730963 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jun 25 16:31:44.731007 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jun 25 16:31:44.731054 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jun 25 16:31:44.731119 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jun 25 16:31:44.731165 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jun 25 16:31:44.731210 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jun 25 16:31:44.731275 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jun 25 16:31:44.731320 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jun 25 16:31:44.731364 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jun 25 16:31:44.731410 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jun 25 16:31:44.731457 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jun 25 16:31:44.731505 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jun 25 16:31:44.731555 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jun 25 16:31:44.731601 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jun 25 16:31:44.731647 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jun 25 16:31:44.731692 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jun 25 16:31:44.731741 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jun 25 16:31:44.731786 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jun 25 16:31:44.731832 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jun 25 16:31:44.731877 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jun 25 16:31:44.731923 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jun 25 16:31:44.731968 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jun 25 16:31:44.732014 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jun 25 16:31:44.732062 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jun 25 16:31:44.732185 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jun 25 16:31:44.732237 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jun 25 16:31:44.732285 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jun 25 16:31:44.732332 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jun 25 16:31:44.732378 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jun 25 16:31:44.732425 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jun 25 16:31:44.732472 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jun 25 16:31:44.732526 kernel: pci 0000:0b:00.0: supports D1 D2 Jun 25 16:31:44.732573 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jun 25 16:31:44.732619 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jun 25 16:31:44.732665 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jun 25 16:31:44.732711 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jun 25 16:31:44.732756 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jun 25 16:31:44.732801 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jun 25 16:31:44.732849 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jun 25 16:31:44.732911 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jun 25 16:31:44.732955 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jun 25 16:31:44.733001 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jun 25 16:31:44.733045 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jun 25 16:31:44.733095 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jun 25 16:31:44.733141 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jun 25 16:31:44.733185 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jun 25 16:31:44.733233 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jun 25 16:31:44.733277 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jun 25 16:31:44.733322 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jun 25 16:31:44.733367 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jun 25 16:31:44.733411 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jun 25 16:31:44.733456 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jun 25 16:31:44.733501 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jun 25 16:31:44.733545 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jun 25 16:31:44.733593 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jun 25 16:31:44.733637 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jun 25 16:31:44.733681 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jun 25 16:31:44.733726 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jun 25 16:31:44.733769 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jun 25 16:31:44.733813 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jun 25 16:31:44.733858 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jun 25 16:31:44.733903 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jun 25 16:31:44.733949 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jun 25 16:31:44.733994 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jun 25 16:31:44.734039 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jun 25 16:31:44.734084 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jun 25 16:31:44.734200 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jun 25 16:31:44.734244 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jun 25 16:31:44.734289 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jun 25 16:31:44.734336 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jun 25 16:31:44.734380 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jun 25 16:31:44.734423 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jun 25 16:31:44.734468 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jun 25 16:31:44.734512 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jun 25 16:31:44.734592 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jun 25 16:31:44.734637 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jun 25 16:31:44.734681 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jun 25 16:31:44.734726 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jun 25 16:31:44.734771 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jun 25 16:31:44.734815 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jun 25 16:31:44.734860 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jun 25 16:31:44.734905 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jun 25 16:31:44.734949 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jun 25 16:31:44.734993 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jun 25 16:31:44.735038 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jun 25 16:31:44.735084 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jun 25 16:31:44.735142 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jun 25 16:31:44.735187 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jun 25 16:31:44.735232 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jun 25 16:31:44.735276 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jun 25 16:31:44.735322 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jun 25 16:31:44.735366 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jun 25 16:31:44.735411 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jun 25 16:31:44.735458 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jun 25 16:31:44.735503 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jun 25 16:31:44.735552 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jun 25 16:31:44.735597 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jun 25 16:31:44.735641 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jun 25 16:31:44.735685 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jun 25 16:31:44.735730 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jun 25 16:31:44.735774 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jun 25 16:31:44.735820 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jun 25 16:31:44.735865 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jun 25 16:31:44.735910 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jun 25 16:31:44.735955 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jun 25 16:31:44.735999 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jun 25 16:31:44.736043 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jun 25 16:31:44.736096 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jun 25 16:31:44.736142 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jun 25 16:31:44.736188 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jun 25 16:31:44.736234 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jun 25 16:31:44.736278 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jun 25 16:31:44.736323 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jun 25 16:31:44.736330 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jun 25 16:31:44.736336 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 Jun 25 16:31:44.736341 kernel: ACPI: PCI: Interrupt link LNKB disabled Jun 25 16:31:44.736347 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 16:31:44.736354 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jun 25 16:31:44.736359 kernel: iommu: Default domain type: Translated Jun 25 16:31:44.736364 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 16:31:44.736369 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 16:31:44.736375 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 16:31:44.736380 kernel: PTP clock support registered Jun 25 16:31:44.736385 kernel: PCI: Using ACPI for IRQ routing Jun 25 16:31:44.736390 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 16:31:44.736396 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jun 25 16:31:44.736402 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jun 25 16:31:44.736446 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jun 25 16:31:44.736490 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jun 25 16:31:44.736534 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 25 16:31:44.736541 kernel: vgaarb: loaded Jun 25 16:31:44.736547 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jun 25 16:31:44.736552 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jun 25 16:31:44.736558 kernel: clocksource: Switched to clocksource tsc-early Jun 25 16:31:44.736563 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 16:31:44.736570 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 16:31:44.736576 kernel: pnp: PnP ACPI init Jun 25 16:31:44.736622 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jun 25 16:31:44.736663 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jun 25 16:31:44.736703 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jun 25 16:31:44.736746 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jun 25 16:31:44.736791 kernel: pnp 00:06: [dma 2] Jun 25 16:31:44.736858 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jun 25 16:31:44.736909 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jun 25 16:31:44.736950 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jun 25 16:31:44.736958 kernel: pnp: PnP ACPI: found 8 devices Jun 25 16:31:44.736963 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 16:31:44.736969 kernel: NET: Registered PF_INET protocol family Jun 25 16:31:44.736974 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 16:31:44.736979 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 25 16:31:44.736986 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 16:31:44.736992 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 16:31:44.736997 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 25 16:31:44.737003 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 25 16:31:44.737008 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 16:31:44.737013 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 16:31:44.737018 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 
16:31:44.737023 kernel: NET: Registered PF_XDP protocol family Jun 25 16:31:44.737068 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jun 25 16:31:44.737155 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jun 25 16:31:44.737201 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jun 25 16:31:44.737247 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jun 25 16:31:44.737291 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jun 25 16:31:44.737335 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jun 25 16:31:44.737383 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jun 25 16:31:44.737427 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jun 25 16:31:44.737472 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jun 25 16:31:44.737516 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jun 25 16:31:44.737566 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jun 25 16:31:44.737611 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jun 25 16:31:44.737674 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jun 25 16:31:44.737720 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jun 25 16:31:44.737764 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jun 25 16:31:44.737809 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jun 25 16:31:44.737854 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jun 25 16:31:44.737899 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jun 25 16:31:44.737946 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jun 25 16:31:44.737991 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jun 25 16:31:44.738035 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jun 25 16:31:44.738081 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jun 25 16:31:44.738162 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jun 25 16:31:44.738207 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jun 25 16:31:44.738254 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jun 25 16:31:44.738298 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.738343 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.738387 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.738431 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.738475 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.738523 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.738574 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.738622 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jun 25 
16:31:44.738666 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.738711 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.738755 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.738799 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.738843 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.738888 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.738932 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.739041 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.739165 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.739214 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.739258 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.739304 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.739348 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.739392 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.739436 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.739484 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.739528 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.739573 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.739617 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.739662 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.739706 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.739750 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.739794 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.739841 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.739909 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.740021 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.740067 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.740120 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.740354 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.740401 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.740446 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.740494 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.740539 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.740583 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.740627 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.740672 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.740716 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.740760 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.740805 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.740850 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] 
Jun 25 16:31:44.740898 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.740942 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.740986 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.741030 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.741076 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.741148 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.741192 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.741238 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.741281 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.741328 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.741372 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.741416 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.741460 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.741504 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.741552 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.741596 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.741640 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.741684 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.741728 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.741774 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.741819 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.741864 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.741909 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.741952 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.741997 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.742041 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.742092 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.742138 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.742185 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.742230 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.742275 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.742319 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.742364 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.742409 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.742453 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jun 25 16:31:44.742497 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:31:44.742575 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jun 25 16:31:44.742637 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jun 25 16:31:44.742686 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jun 25 16:31:44.742730 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jun 25 16:31:44.742774 kernel: 
pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jun 25 16:31:44.742824 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jun 25 16:31:44.742870 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jun 25 16:31:44.742916 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jun 25 16:31:44.742962 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jun 25 16:31:44.743008 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jun 25 16:31:44.743056 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jun 25 16:31:44.743109 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jun 25 16:31:44.743154 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jun 25 16:31:44.743199 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jun 25 16:31:44.743245 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jun 25 16:31:44.743290 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jun 25 16:31:44.743335 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jun 25 16:31:44.743380 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jun 25 16:31:44.743425 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jun 25 16:31:44.743472 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jun 25 16:31:44.743517 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jun 25 16:31:44.743562 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jun 25 16:31:44.743608 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jun 25 16:31:44.743653 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jun 25 16:31:44.743700 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jun 25 16:31:44.743749 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jun 25 16:31:44.743794 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jun 25 16:31:44.743858 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jun 25 16:31:44.743906 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jun 25 16:31:44.743952 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jun 25 16:31:44.743998 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jun 25 16:31:44.744080 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jun 25 16:31:44.744149 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jun 25 16:31:44.744200 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jun 25 16:31:44.744249 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jun 25 16:31:44.744294 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jun 25 16:31:44.744339 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jun 25 16:31:44.744382 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jun 25 16:31:44.744429 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jun 25 16:31:44.744473 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jun 25 16:31:44.744517 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jun 25 16:31:44.744597 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jun 25 16:31:44.744642 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jun 25 16:31:44.744687 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jun 25 16:31:44.744734 kernel: pci 0000:00:16.2: bridge window [mem 
0xfcc00000-0xfccfffff] Jun 25 16:31:44.744779 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jun 25 16:31:44.744823 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jun 25 16:31:44.744868 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jun 25 16:31:44.744912 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jun 25 16:31:44.744956 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jun 25 16:31:44.745000 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jun 25 16:31:44.745044 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jun 25 16:31:44.745097 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jun 25 16:31:44.745149 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jun 25 16:31:44.745194 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jun 25 16:31:44.745239 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jun 25 16:31:44.745283 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jun 25 16:31:44.745327 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jun 25 16:31:44.745372 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jun 25 16:31:44.745416 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jun 25 16:31:44.745460 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jun 25 16:31:44.745504 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jun 25 16:31:44.745549 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jun 25 16:31:44.745596 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jun 25 16:31:44.745641 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jun 25 16:31:44.745686 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jun 25 16:31:44.745731 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jun 25 16:31:44.745774 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jun 25 16:31:44.745818 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jun 25 16:31:44.745864 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jun 25 16:31:44.745909 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jun 25 16:31:44.745953 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jun 25 16:31:44.746000 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jun 25 16:31:44.746044 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jun 25 16:31:44.746174 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jun 25 16:31:44.746224 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jun 25 16:31:44.746302 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jun 25 16:31:44.746345 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jun 25 16:31:44.746390 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jun 25 16:31:44.746434 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jun 25 16:31:44.746478 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jun 25 16:31:44.746525 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jun 25 16:31:44.746573 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jun 25 16:31:44.746618 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jun 25 16:31:44.746662 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jun 
25 16:31:44.746705 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jun 25 16:31:44.746749 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jun 25 16:31:44.746792 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jun 25 16:31:44.746837 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jun 25 16:31:44.746882 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jun 25 16:31:44.746926 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jun 25 16:31:44.746972 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jun 25 16:31:44.747016 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jun 25 16:31:44.747060 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jun 25 16:31:44.747121 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jun 25 16:31:44.747170 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jun 25 16:31:44.749843 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jun 25 16:31:44.749902 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jun 25 16:31:44.749950 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jun 25 16:31:44.749996 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jun 25 16:31:44.750041 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jun 25 16:31:44.750095 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jun 25 16:31:44.750194 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jun 25 16:31:44.750242 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jun 25 16:31:44.750288 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jun 25 16:31:44.750333 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jun 25 16:31:44.750377 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jun 25 16:31:44.750422 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jun 25 16:31:44.750467 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jun 25 16:31:44.750511 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jun 25 16:31:44.750560 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jun 25 16:31:44.750605 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jun 25 16:31:44.750649 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jun 25 16:31:44.750693 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jun 25 16:31:44.750736 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jun 25 16:31:44.750777 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jun 25 16:31:44.750816 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jun 25 16:31:44.750856 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jun 25 16:31:44.750895 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jun 25 16:31:44.750940 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jun 25 16:31:44.750982 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jun 25 16:31:44.751024 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jun 25 16:31:44.751064 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jun 25 16:31:44.751131 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jun 25 16:31:44.751173 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jun 25 
16:31:44.751214 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jun 25 16:31:44.751258 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jun 25 16:31:44.751302 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jun 25 16:31:44.751345 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jun 25 16:31:44.751385 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jun 25 16:31:44.751432 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jun 25 16:31:44.751473 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jun 25 16:31:44.751514 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jun 25 16:31:44.751562 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jun 25 16:31:44.751603 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jun 25 16:31:44.751644 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jun 25 16:31:44.751688 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jun 25 16:31:44.751730 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jun 25 16:31:44.751776 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jun 25 16:31:44.751821 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jun 25 16:31:44.751866 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jun 25 16:31:44.751908 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jun 25 16:31:44.751962 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jun 25 16:31:44.752004 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jun 25 16:31:44.752050 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jun 25 16:31:44.752103 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jun 25 16:31:44.752155 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jun 25 16:31:44.752197 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jun 25 16:31:44.752238 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jun 25 16:31:44.752284 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jun 25 16:31:44.752326 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jun 25 16:31:44.752370 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jun 25 16:31:44.752416 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jun 25 16:31:44.752458 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jun 25 16:31:44.752499 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jun 25 16:31:44.752585 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jun 25 16:31:44.752646 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jun 25 16:31:44.752691 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jun 25 16:31:44.752736 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jun 25 16:31:44.752782 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jun 25 16:31:44.752824 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jun 25 16:31:44.752869 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jun 25 16:31:44.752912 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jun 25 16:31:44.752957 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jun 25 16:31:44.753001 kernel: pci_bus 0000:12: resource 2 
[mem 0xe5f00000-0xe5ffffff 64bit pref] Jun 25 16:31:44.753047 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jun 25 16:31:44.753189 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jun 25 16:31:44.753236 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jun 25 16:31:44.753285 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jun 25 16:31:44.753328 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jun 25 16:31:44.753370 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jun 25 16:31:44.753419 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jun 25 16:31:44.753461 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jun 25 16:31:44.753504 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jun 25 16:31:44.753549 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jun 25 16:31:44.753591 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jun 25 16:31:44.753637 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jun 25 16:31:44.753682 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jun 25 16:31:44.753727 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jun 25 16:31:44.753770 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jun 25 16:31:44.753816 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jun 25 16:31:44.753858 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jun 25 16:31:44.753909 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jun 25 16:31:44.753953 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jun 25 16:31:44.753998 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jun 25 16:31:44.754041 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jun 25 16:31:44.754082 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jun 25 16:31:44.754145 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jun 25 16:31:44.754188 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jun 25 16:31:44.754233 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jun 25 16:31:44.754278 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jun 25 16:31:44.754321 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jun 25 16:31:44.754367 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jun 25 16:31:44.754409 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jun 25 16:31:44.754454 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jun 25 16:31:44.754497 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jun 25 16:31:44.754547 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jun 25 16:31:44.754589 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jun 25 16:31:44.754635 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jun 25 16:31:44.754677 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jun 25 16:31:44.754723 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jun 25 16:31:44.754765 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jun 25 16:31:44.754818 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 25 16:31:44.754826 kernel: PCI: CLS 32 bytes, default 64 Jun 25 
16:31:44.754832 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 25 16:31:44.754838 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jun 25 16:31:44.754844 kernel: clocksource: Switched to clocksource tsc Jun 25 16:31:44.754850 kernel: Initialise system trusted keyrings Jun 25 16:31:44.754855 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 25 16:31:44.754861 kernel: Key type asymmetric registered Jun 25 16:31:44.754868 kernel: Asymmetric key parser 'x509' registered Jun 25 16:31:44.754874 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 16:31:44.754880 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 16:31:44.754886 kernel: io scheduler mq-deadline registered Jun 25 16:31:44.754891 kernel: io scheduler kyber registered Jun 25 16:31:44.754897 kernel: io scheduler bfq registered Jun 25 16:31:44.754942 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jun 25 16:31:44.754991 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.755038 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jun 25 16:31:44.755117 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.755171 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jun 25 16:31:44.755538 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.755590 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jun 25 16:31:44.755639 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.755685 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jun 25 16:31:44.755734 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.755779 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jun 25 16:31:44.755824 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.755870 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jun 25 16:31:44.755916 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.755963 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jun 25 16:31:44.756008 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.756054 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jun 25 16:31:44.756154 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.756201 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jun 25 16:31:44.756246 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.756292 kernel: pcieport 0000:00:16.2: PME: 
Signaling with IRQ 34 Jun 25 16:31:44.756340 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.756385 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jun 25 16:31:44.756431 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.756476 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jun 25 16:31:44.756521 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.756604 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jun 25 16:31:44.756652 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.756697 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jun 25 16:31:44.756742 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.756787 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jun 25 16:31:44.756833 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.756880 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jun 25 16:31:44.756925 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.756970 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jun 25 16:31:44.757014 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.757058 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jun 25 16:31:44.757110 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.757155 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jun 25 16:31:44.757204 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.757249 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jun 25 16:31:44.757293 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.757339 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jun 25 16:31:44.757403 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.757451 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jun 25 16:31:44.757498 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.757544 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jun 25 16:31:44.757590 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.757636 kernel: pcieport 0000:00:18.0: PME: Signaling 
with IRQ 48 Jun 25 16:31:44.758001 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.758054 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jun 25 16:31:44.758133 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.758181 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jun 25 16:31:44.758227 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.758272 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jun 25 16:31:44.758317 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.758366 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jun 25 16:31:44.758411 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.758457 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jun 25 16:31:44.758502 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.758551 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jun 25 16:31:44.758598 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.758645 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jun 25 16:31:44.758691 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:31:44.758700 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 16:31:44.758706 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 16:31:44.758712 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 16:31:44.758718 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jun 25 16:31:44.758725 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 16:31:44.758730 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 16:31:44.758777 kernel: rtc_cmos 00:01: registered as rtc0 Jun 25 16:31:44.758819 kernel: rtc_cmos 00:01: setting system clock to 2024-06-25T16:31:44 UTC (1719333104) Jun 25 16:31:44.758859 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jun 25 16:31:44.758867 kernel: fail to initialize ptp_kvm Jun 25 16:31:44.758873 kernel: intel_pstate: CPU model not supported Jun 25 16:31:44.758879 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 25 16:31:44.758886 kernel: NET: Registered PF_INET6 protocol family Jun 25 16:31:44.758892 kernel: Segment Routing with IPv6 Jun 25 16:31:44.758898 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 16:31:44.758903 kernel: NET: Registered PF_PACKET protocol family Jun 25 16:31:44.758909 kernel: Key type dns_resolver registered Jun 25 16:31:44.758915 kernel: IPI shorthand broadcast: enabled Jun 25 16:31:44.758921 kernel: sched_clock: Marking stable (865234001, 221581119)->(1142516015, -55700895) Jun 25 16:31:44.758927 kernel: registered taskstats version 1 Jun 25 
16:31:44.758932 kernel: Loading compiled-in X.509 certificates Jun 25 16:31:44.758939 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: c37bb6ef57220bb1c07535cfcaa08c84d806a137' Jun 25 16:31:44.758944 kernel: Key type .fscrypt registered Jun 25 16:31:44.758950 kernel: Key type fscrypt-provisioning registered Jun 25 16:31:44.758956 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 16:31:44.758962 kernel: ima: Allocated hash algorithm: sha1 Jun 25 16:31:44.758967 kernel: ima: No architecture policies found Jun 25 16:31:44.758973 kernel: clk: Disabling unused clocks Jun 25 16:31:44.758978 kernel: Freeing unused kernel image (initmem) memory: 47156K Jun 25 16:31:44.758984 kernel: Write protecting the kernel read-only data: 34816k Jun 25 16:31:44.758990 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jun 25 16:31:44.758996 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jun 25 16:31:44.759002 kernel: Run /init as init process Jun 25 16:31:44.759007 kernel: with arguments: Jun 25 16:31:44.759013 kernel: /init Jun 25 16:31:44.759019 kernel: with environment: Jun 25 16:31:44.759024 kernel: HOME=/ Jun 25 16:31:44.759029 kernel: TERM=linux Jun 25 16:31:44.759035 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 16:31:44.759043 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:31:44.759050 systemd[1]: Detected virtualization vmware. Jun 25 16:31:44.759056 systemd[1]: Detected architecture x86-64. Jun 25 16:31:44.759062 systemd[1]: Running in initrd. Jun 25 16:31:44.759067 systemd[1]: No hostname configured, using default hostname. Jun 25 16:31:44.759073 systemd[1]: Hostname set to . Jun 25 16:31:44.759079 systemd[1]: Initializing machine ID from random generator. Jun 25 16:31:44.759110 systemd[1]: Queued start job for default target initrd.target. Jun 25 16:31:44.759117 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:31:44.759123 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:31:44.759129 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:31:44.759134 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:31:44.759140 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:31:44.759146 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:31:44.759152 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:31:44.759160 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:31:44.759165 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 16:31:44.759171 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 16:31:44.759177 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 16:31:44.759183 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:31:44.759189 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:31:44.759196 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:31:44.759202 systemd[1]: Reached target sockets.target - Socket Units. 
Jun 25 16:31:44.759208 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:31:44.759214 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 16:31:44.759220 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 16:31:44.759401 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 16:31:44.759408 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:31:44.759414 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jun 25 16:31:44.759420 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:31:44.759426 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 16:31:44.759434 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:31:44.759440 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:31:44.759446 kernel: audit: type=1130 audit(1719333104.739:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:44.759452 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:31:44.759458 kernel: audit: type=1130 audit(1719333104.746:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:44.759464 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 16:31:44.759473 systemd-journald[211]: Journal started Jun 25 16:31:44.759502 systemd-journald[211]: Runtime Journal (/run/log/journal/35b95dfe697746beb811f73ea1edf7b8) is 4.8M, max 38.7M, 33.9M free. Jun 25 16:31:44.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:44.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:44.739156 systemd-modules-load[212]: Inserted module 'overlay' Jun 25 16:31:44.761252 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:31:44.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:44.765104 kernel: audit: type=1130 audit(1719333104.760:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:44.767100 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 16:31:44.768422 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:31:44.771684 kernel: Bridge firewalling registered Jun 25 16:31:44.768826 systemd-modules-load[212]: Inserted module 'br_netfilter' Jun 25 16:31:44.772335 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jun 25 16:31:44.772711 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:31:44.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:44.775756 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 16:31:44.776096 kernel: audit: type=1130 audit(1719333104.771:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:44.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:44.779096 kernel: audit: type=1130 audit(1719333104.774:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:44.779141 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:31:44.785126 kernel: SCSI subsystem initialized Jun 25 16:31:44.775000 audit: BPF prog-id=6 op=LOAD Jun 25 16:31:44.788127 kernel: audit: type=1334 audit(1719333104.775:7): prog-id=6 op=LOAD Jun 25 16:31:44.789507 dracut-cmdline[229]: dracut-dracut-053 Jun 25 16:31:44.792900 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:31:44.797295 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 16:31:44.797316 kernel: device-mapper: uevent: version 1.0.3 Jun 25 16:31:44.797327 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 16:31:44.802101 systemd-modules-load[212]: Inserted module 'dm_multipath' Jun 25 16:31:44.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:44.802649 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:31:44.806093 kernel: audit: type=1130 audit(1719333104.801:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:44.806127 systemd-resolved[230]: Positive Trust Anchors: Jun 25 16:31:44.806198 systemd-resolved[230]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:31:44.806219 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:31:44.807889 systemd-resolved[230]: Defaulting to hostname 'linux'. Jun 25 16:31:44.808189 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:31:44.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:44.810222 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:31:44.810357 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:31:44.813107 kernel: audit: type=1130 audit(1719333104.809:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:44.814174 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:31:44.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:44.817110 kernel: audit: type=1130 audit(1719333104.813:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:44.835101 kernel: Loading iSCSI transport class v2.0-870. Jun 25 16:31:44.843097 kernel: iscsi: registered transport (tcp) Jun 25 16:31:44.857429 kernel: iscsi: registered transport (qla4xxx) Jun 25 16:31:44.857460 kernel: QLogic iSCSI HBA Driver Jun 25 16:31:44.876237 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 16:31:44.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:44.887191 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 16:31:44.933109 kernel: raid6: avx2x4 gen() 45780 MB/s Jun 25 16:31:44.950097 kernel: raid6: avx2x2 gen() 52508 MB/s Jun 25 16:31:44.967336 kernel: raid6: avx2x1 gen() 44283 MB/s Jun 25 16:31:44.967354 kernel: raid6: using algorithm avx2x2 gen() 52508 MB/s Jun 25 16:31:44.985292 kernel: raid6: .... xor() 30531 MB/s, rmw enabled Jun 25 16:31:44.985327 kernel: raid6: using avx2x2 recovery algorithm Jun 25 16:31:44.988103 kernel: xor: automatically using best checksumming function avx Jun 25 16:31:45.082107 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jun 25 16:31:45.086880 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:31:45.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:31:45.086000 audit: BPF prog-id=7 op=LOAD Jun 25 16:31:45.086000 audit: BPF prog-id=8 op=LOAD Jun 25 16:31:45.093198 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:31:45.100541 systemd-udevd[411]: Using default interface naming scheme 'v252'. Jun 25 16:31:45.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:45.103228 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:31:45.103789 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 16:31:45.111421 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Jun 25 16:31:45.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:45.126920 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:31:45.135233 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:31:45.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:45.198787 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:31:45.244388 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jun 25 16:31:45.244427 kernel: vmw_pvscsi: using 64bit dma Jun 25 16:31:45.245441 kernel: vmw_pvscsi: max_id: 16 Jun 25 16:31:45.245459 kernel: vmw_pvscsi: setting ring_pages to 8 Jun 25 16:31:45.251120 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jun 25 16:31:45.251137 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jun 25 16:31:45.253676 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jun 25 16:31:45.255414 kernel: vmw_pvscsi: enabling reqCallThreshold Jun 25 16:31:45.255429 kernel: vmw_pvscsi: driver-based request coalescing enabled Jun 25 16:31:45.255437 kernel: vmw_pvscsi: using MSI-X Jun 25 16:31:45.256717 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jun 25 16:31:45.259480 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jun 25 16:31:45.260688 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jun 25 16:31:45.277074 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 16:31:45.282099 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 16:31:45.284099 kernel: AES CTR mode by8 optimization enabled Jun 25 16:31:45.294100 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jun 25 16:31:45.297099 kernel: libata version 3.00 loaded. 
Jun 25 16:31:45.299099 kernel: ata_piix 0000:00:07.1: version 2.13 Jun 25 16:31:45.311874 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jun 25 16:31:45.314267 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 25 16:31:45.314334 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jun 25 16:31:45.314394 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jun 25 16:31:45.314450 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jun 25 16:31:45.314506 kernel: scsi host1: ata_piix Jun 25 16:31:45.314568 kernel: scsi host2: ata_piix Jun 25 16:31:45.314623 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jun 25 16:31:45.314630 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jun 25 16:31:45.314637 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:31:45.314644 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 25 16:31:45.480119 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jun 25 16:31:45.484111 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jun 25 16:31:45.517136 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jun 25 16:31:45.540458 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 16:31:45.540474 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (461) Jun 25 16:31:45.540483 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jun 25 16:31:45.538360 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jun 25 16:31:45.541136 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jun 25 16:31:45.545098 kernel: BTRFS: device fsid dda7891e-deba-495b-b677-4df6bea75326 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (458) Jun 25 16:31:45.547478 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jun 25 16:31:45.550428 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jun 25 16:31:45.550695 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jun 25 16:31:45.560270 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 16:31:45.630120 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:31:45.637108 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:31:46.636947 disk-uuid[555]: The operation has completed successfully. Jun 25 16:31:46.637173 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:31:46.672407 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 16:31:46.672677 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 16:31:46.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:46.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:46.678326 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 16:31:46.682451 sh[572]: Success Jun 25 16:31:46.699101 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 25 16:31:46.766157 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
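[Editor's note] The device units found above (ROOT, EFI-SYSTEM, OEM, USR-A) resolve through the /dev/disk/by-label and /dev/disk/by-partlabel symlinks that udev maintains for the scanned sda partitions. As an illustrative aside only (not part of this boot, standard-library Python), the same label-to-device mapping can be listed on a running Linux host:

    #!/usr/bin/env python3
    # Illustrative sketch: list the /dev/disk/by-label symlinks that udev creates,
    # which is where device units such as dev-disk-by\x2dlabel-ROOT.device come from.
    # Assumes a Linux host with udev; not specific to this Flatcar boot.
    import os

    BY_LABEL = "/dev/disk/by-label"

    for name in sorted(os.listdir(BY_LABEL)):
        target = os.path.realpath(os.path.join(BY_LABEL, name))
        print(f"{name:16} -> {target}")
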
Jun 25 16:31:46.772756 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 16:31:46.774151 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 16:31:46.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:46.805915 kernel: BTRFS info (device dm-0): first mount of filesystem dda7891e-deba-495b-b677-4df6bea75326 Jun 25 16:31:46.805948 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:31:46.805959 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 16:31:46.807390 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 16:31:46.808460 kernel: BTRFS info (device dm-0): using free space tree Jun 25 16:31:46.831108 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jun 25 16:31:46.833920 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 16:31:46.849255 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jun 25 16:31:46.850010 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 16:31:46.869458 kernel: BTRFS info (device sda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:31:46.869505 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:31:46.869515 kernel: BTRFS info (device sda6): using free space tree Jun 25 16:31:46.878105 kernel: BTRFS info (device sda6): enabling ssd optimizations Jun 25 16:31:46.888258 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 16:31:46.890208 kernel: BTRFS info (device sda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:31:46.891558 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 16:31:46.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:46.893205 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 16:31:46.935280 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jun 25 16:31:46.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:46.939490 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
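[Editor's note] verity-setup.service above assembles /dev/mapper/usr from the verity.usr= and verity.usrhash= values passed on the kernel command line (see the dracut-cmdline entry earlier in this log). As a hedged illustration, standard-library Python only and not a tool used by this boot, those parameters can be pulled out of /proc/cmdline like this:

    #!/usr/bin/env python3
    # Illustrative sketch: extract the dm-verity related parameters for /usr from
    # the kernel command line, i.e. the values verity-setup.service consumes above.
    from pathlib import Path

    def verity_params(cmdline: str) -> dict:
        # Only exact key matches are kept; verity.usr= carries a PARTUUID value.
        wanted = ("verity.usr", "verity.usrhash", "mount.usr", "root")
        out = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            if sep and key in wanted:
                out[key] = value
        return out

    if __name__ == "__main__":
        for key, value in verity_params(Path("/proc/cmdline").read_text()).items():
            print(f"{key} = {value}")
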
Jun 25 16:31:46.987620 ignition[631]: Ignition 2.15.0 Jun 25 16:31:46.987878 ignition[631]: Stage: fetch-offline Jun 25 16:31:46.988012 ignition[631]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:31:46.988151 ignition[631]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 25 16:31:46.988338 ignition[631]: parsed url from cmdline: "" Jun 25 16:31:46.988367 ignition[631]: no config URL provided Jun 25 16:31:46.988467 ignition[631]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:31:46.988592 ignition[631]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:31:46.989055 ignition[631]: config successfully fetched Jun 25 16:31:46.989114 ignition[631]: parsing config with SHA512: e60778d925c4fd70b8fe05aa67bd8f20d48db5e0ad1b23ec00447de11430119ed8368342c0d50aaeb6e7aa1c2169abbe96cddd9a1e15fc9a626f4a7537f21da7 Jun 25 16:31:46.991539 unknown[631]: fetched base config from "system" Jun 25 16:31:46.991685 unknown[631]: fetched user config from "vmware" Jun 25 16:31:46.992044 ignition[631]: fetch-offline: fetch-offline passed Jun 25 16:31:46.992219 ignition[631]: Ignition finished successfully Jun 25 16:31:46.993022 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:31:46.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:47.011009 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:31:47.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:47.010000 audit: BPF prog-id=9 op=LOAD Jun 25 16:31:47.016192 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:31:47.028338 systemd-networkd[762]: lo: Link UP Jun 25 16:31:47.028344 systemd-networkd[762]: lo: Gained carrier Jun 25 16:31:47.028597 systemd-networkd[762]: Enumeration completed Jun 25 16:31:47.028784 systemd-networkd[762]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jun 25 16:31:47.030686 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jun 25 16:31:47.030768 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jun 25 16:31:47.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:47.029286 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:31:47.029434 systemd[1]: Reached target network.target - Network. Jun 25 16:31:47.031462 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 25 16:31:47.031738 systemd-networkd[762]: ens192: Link UP Jun 25 16:31:47.031740 systemd-networkd[762]: ens192: Gained carrier Jun 25 16:31:47.032599 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 16:31:47.033074 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:31:47.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:31:47.036363 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:31:47.037023 systemd[1]: Starting iscsid.service - Open-iSCSI... Jun 25 16:31:47.038884 iscsid[772]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:31:47.038884 iscsid[772]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jun 25 16:31:47.038884 iscsid[772]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 16:31:47.038884 iscsid[772]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 16:31:47.038884 iscsid[772]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:31:47.039853 iscsid[772]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 16:31:47.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:47.039732 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 16:31:47.040502 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 16:31:47.045169 ignition[764]: Ignition 2.15.0 Jun 25 16:31:47.045457 ignition[764]: Stage: kargs Jun 25 16:31:47.045677 ignition[764]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:31:47.045797 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 25 16:31:47.047312 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 16:31:47.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:47.047507 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:31:47.047631 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:31:47.047846 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:31:47.048150 ignition[764]: kargs: kargs passed Jun 25 16:31:47.048284 ignition[764]: Ignition finished successfully Jun 25 16:31:47.048515 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 16:31:47.050643 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 16:31:47.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:47.055390 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 16:31:47.055764 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:31:47.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jun 25 16:31:47.062834 ignition[784]: Ignition 2.15.0 Jun 25 16:31:47.062845 ignition[784]: Stage: disks Jun 25 16:31:47.062929 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:31:47.062936 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 25 16:31:47.063502 ignition[784]: disks: disks passed Jun 25 16:31:47.063531 ignition[784]: Ignition finished successfully Jun 25 16:31:47.064020 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 16:31:47.064219 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 16:31:47.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:47.064341 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:31:47.064532 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:31:47.064726 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:31:47.064907 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:31:47.067204 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 16:31:47.077325 systemd-fsck[797]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jun 25 16:31:47.078658 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 16:31:47.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:47.082175 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 16:31:47.135104 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 16:31:47.135223 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 16:31:47.135379 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 16:31:47.145167 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:31:47.146007 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 16:31:47.146542 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 16:31:47.146757 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 16:31:47.147028 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:31:47.148820 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 16:31:47.149533 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 16:31:47.163105 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (803) Jun 25 16:31:47.176492 kernel: BTRFS info (device sda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:31:47.176525 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:31:47.176533 kernel: BTRFS info (device sda6): using free space tree Jun 25 16:31:47.210159 kernel: BTRFS info (device sda6): enabling ssd optimizations Jun 25 16:31:47.214116 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 16:31:47.254521 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 16:31:47.258108 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory Jun 25 16:31:47.261821 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 16:31:47.264851 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 16:31:47.364856 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 16:31:47.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:47.371188 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 16:31:47.371736 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 16:31:47.377201 kernel: BTRFS info (device sda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:31:47.384882 ignition[914]: INFO : Ignition 2.15.0 Jun 25 16:31:47.384882 ignition[914]: INFO : Stage: mount Jun 25 16:31:47.385252 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:31:47.385252 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 25 16:31:47.386390 ignition[914]: INFO : mount: mount passed Jun 25 16:31:47.386390 ignition[914]: INFO : Ignition finished successfully Jun 25 16:31:47.387009 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 16:31:47.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:47.396169 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 16:31:47.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:47.405909 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 16:31:47.794308 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 16:31:47.801422 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:31:47.846105 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (924) Jun 25 16:31:47.848129 kernel: BTRFS info (device sda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:31:47.848157 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:31:47.850204 kernel: BTRFS info (device sda6): using free space tree Jun 25 16:31:47.854103 kernel: BTRFS info (device sda6): enabling ssd optimizations Jun 25 16:31:47.854987 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
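[Editor's note] The ignition-files stage logged next applies the user config that the fetch-offline stage earlier retrieved from the vmware OEM source. For orientation only, here is a hypothetical, minimal Ignition-style config sketched in Python and emitted as JSON; the field names follow Ignition's v3 config spec, but every concrete value is a placeholder, not the config this machine actually used:

    #!/usr/bin/env python3
    # Illustrative sketch only: a minimal Ignition-style config assembled in Python
    # and printed as JSON. Values are hypothetical stand-ins for the kind of
    # operations the files stage below performs (write a file, enable a unit,
    # add an SSH key for "core").
    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "passwd": {
            "users": [
                {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... example"]}
            ]
        },
        "storage": {
            "files": [
                {
                    "path": "/etc/flatcar/update.conf",
                    "mode": 420,
                    "contents": {"source": "data:,GROUP%3Dstable%0A"},
                }
            ]
        },
        "systemd": {
            "units": [
                {
                    "name": "prepare-helm.service",
                    "enabled": True,
                    "contents": "[Unit]\nDescription=example\n[Service]\nType=oneshot\nExecStart=/usr/bin/true\n[Install]\nWantedBy=multi-user.target\n",
                }
            ]
        },
    }

    print(json.dumps(config, indent=2))
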
Jun 25 16:31:47.868613 ignition[942]: INFO : Ignition 2.15.0 Jun 25 16:31:47.868613 ignition[942]: INFO : Stage: files Jun 25 16:31:47.869027 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:31:47.869027 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 25 16:31:47.869465 ignition[942]: DEBUG : files: compiled without relabeling support, skipping Jun 25 16:31:47.870194 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 16:31:47.870194 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 16:31:47.872340 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 16:31:47.872602 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 16:31:47.872959 unknown[942]: wrote ssh authorized keys file for user: core Jun 25 16:31:47.873202 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 16:31:47.875019 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:31:47.875019 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 16:31:47.892144 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 16:31:47.951560 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:31:47.951842 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 16:31:47.952175 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 16:31:47.952370 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:31:47.952639 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:31:47.952825 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:31:47.953068 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:31:47.953275 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:31:47.953514 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:31:47.953796 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:31:47.954113 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:31:47.954303 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:31:47.954626 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:31:47.954853 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:31:47.955123 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jun 25 16:31:48.434839 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 16:31:48.570318 systemd-networkd[762]: ens192: Gained IPv6LL Jun 25 16:31:48.599142 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:31:48.599421 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jun 25 16:31:48.599421 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jun 25 16:31:48.599421 ignition[942]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 25 16:31:48.601506 ignition[942]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:31:48.601744 ignition[942]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:31:48.601744 ignition[942]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 25 16:31:48.601744 ignition[942]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jun 25 16:31:48.601744 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 16:31:48.602589 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 16:31:48.602589 ignition[942]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jun 25 16:31:48.602589 ignition[942]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jun 25 16:31:48.602589 ignition[942]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 16:31:48.631010 ignition[942]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 16:31:48.631244 ignition[942]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jun 25 16:31:48.631244 ignition[942]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jun 25 16:31:48.631244 ignition[942]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 16:31:48.631244 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:31:48.631878 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:31:48.631878 ignition[942]: INFO : files: files passed Jun 25 16:31:48.631878 ignition[942]: INFO : Ignition finished successfully Jun 25 
16:31:48.631911 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 16:31:48.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.638238 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 16:31:48.638749 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 16:31:48.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.640574 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 16:31:48.640627 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 16:31:48.644109 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:31:48.644109 initrd-setup-root-after-ignition[969]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:31:48.645005 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:31:48.645743 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:31:48.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.645904 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 16:31:48.660239 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 16:31:48.668068 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 16:31:48.668142 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 16:31:48.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.668420 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 16:31:48.668547 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 16:31:48.668756 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 16:31:48.669186 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 16:31:48.675824 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:31:48.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:31:48.676352 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 16:31:48.681684 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:31:48.681994 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:31:48.682308 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 16:31:48.682578 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 16:31:48.682645 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:31:48.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.683172 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 16:31:48.683449 systemd[1]: Stopped target basic.target - Basic System. Jun 25 16:31:48.683713 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 16:31:48.683997 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:31:48.684297 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 16:31:48.684579 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 16:31:48.684854 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:31:48.685339 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 16:31:48.685625 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 16:31:48.685897 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:31:48.686204 systemd[1]: Stopped target swap.target - Swaps. Jun 25 16:31:48.686434 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 16:31:48.686501 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:31:48.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.686974 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:31:48.687260 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 16:31:48.687324 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 16:31:48.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.687777 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 16:31:48.687843 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:31:48.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.688328 systemd[1]: Stopped target paths.target - Path Units. Jun 25 16:31:48.688568 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 16:31:48.688753 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:31:48.689070 systemd[1]: Stopped target slices.target - Slice Units. 
Jun 25 16:31:48.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.689258 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 16:31:48.689390 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 16:31:48.689458 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:31:48.690133 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 16:31:48.690206 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 16:31:48.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.699336 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 16:31:48.704938 iscsid[772]: iscsid shutting down. Jun 25 16:31:48.700148 systemd[1]: Stopping iscsid.service - Open-iSCSI... Jun 25 16:31:48.700813 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 16:31:48.700948 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 16:31:48.701039 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:31:48.701265 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 16:31:48.701334 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:31:48.702644 systemd[1]: iscsid.service: Deactivated successfully. Jun 25 16:31:48.702715 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jun 25 16:31:48.703169 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 16:31:48.703222 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:31:48.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.703763 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:31:48.704584 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 16:31:48.704645 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:31:48.708801 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 16:31:48.708853 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
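[Editor's note] The audit records interleaved throughout this log (SERVICE_START/SERVICE_STOP, plus the kernel's type=1130/1131 echoes) share a key=value layout. A small, illustrative Python sketch for splitting one of them into fields; the sample line is adapted from this log and the parser is not part of the boot:

    #!/usr/bin/env python3
    # Illustrative sketch: parse one audit SERVICE_* record into its key=value fields.
    import shlex

    sample = ("audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 "
              "subj=kernel msg='unit=iscsid comm=\"systemd\" "
              "exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? res=success'")

    def parse_audit(line: str) -> dict:
        _, _, rest = line.partition(": ")            # drop the "audit[1]:" prefix
        record_type, _, fields = rest.partition(" ")
        out = {"type": record_type}
        for token in shlex.split(fields):            # shlex keeps the quoted msg='...' together
            key, sep, value = token.partition("=")
            if sep:
                out[key] = value
        return out

    print(parse_audit(sample))
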
Jun 25 16:31:48.709851 ignition[988]: INFO : Ignition 2.15.0 Jun 25 16:31:48.709851 ignition[988]: INFO : Stage: umount Jun 25 16:31:48.709851 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:31:48.709851 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 25 16:31:48.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.710650 ignition[988]: INFO : umount: umount passed Jun 25 16:31:48.710769 ignition[988]: INFO : Ignition finished successfully Jun 25 16:31:48.711288 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 16:31:48.711455 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 16:31:48.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.712157 systemd[1]: Stopped target network.target - Network. Jun 25 16:31:48.712356 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 16:31:48.712484 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:31:48.712691 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 16:31:48.712824 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 16:31:48.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.713070 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 16:31:48.713208 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 16:31:48.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.713448 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 16:31:48.713577 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 16:31:48.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.714030 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 16:31:48.714327 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 16:31:48.715936 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 16:31:48.716123 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 16:31:48.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.716484 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 16:31:48.716622 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:31:48.721398 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jun 25 16:31:48.721606 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 16:31:48.721751 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:31:48.720000 audit: BPF prog-id=9 op=UNLOAD Jun 25 16:31:48.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.722055 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jun 25 16:31:48.722905 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jun 25 16:31:48.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.723221 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 16:31:48.723355 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:31:48.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.723644 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 16:31:48.723782 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 16:31:48.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.724082 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:31:48.726578 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 16:31:48.726862 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 16:31:48.726919 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 16:31:48.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.727571 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 16:31:48.727598 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:31:48.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.728717 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 16:31:48.728000 audit: BPF prog-id=6 op=UNLOAD Jun 25 16:31:48.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.729881 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 16:31:48.729935 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 16:31:48.730834 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 16:31:48.731017 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jun 25 16:31:48.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.731371 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 16:31:48.731514 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 16:31:48.731738 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 16:31:48.732411 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:31:48.732637 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 16:31:48.732775 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:31:48.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.733108 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 16:31:48.733312 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 16:31:48.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.733559 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 16:31:48.736705 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:31:48.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.737503 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 16:31:48.740476 kernel: kauditd_printk_skb: 62 callbacks suppressed Jun 25 16:31:48.740491 kernel: audit: type=1131 audit(1719333108.735:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.740643 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 16:31:48.740806 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:31:48.743566 kernel: audit: type=1131 audit(1719333108.739:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.743643 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 16:31:48.746301 kernel: audit: type=1131 audit(1719333108.742:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:31:48.743676 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:31:48.743819 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 16:31:48.743842 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:31:48.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.746868 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 16:31:48.749326 kernel: audit: type=1131 audit(1719333108.745:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.746913 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 25 16:31:48.747353 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 16:31:48.747411 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 16:31:48.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.754665 kernel: audit: type=1130 audit(1719333108.748:77): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.754682 kernel: audit: type=1131 audit(1719333108.748:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.791339 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 16:31:48.791398 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 16:31:48.794011 kernel: audit: type=1131 audit(1719333108.790:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.791664 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 16:31:48.794067 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 16:31:48.794106 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 16:31:48.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:48.798115 kernel: audit: type=1131 audit(1719333108.793:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:31:48.799363 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 16:31:48.803505 systemd[1]: Switching root. Jun 25 16:31:48.820329 systemd-journald[211]: Journal stopped Jun 25 16:31:49.745392 systemd-journald[211]: Received SIGTERM from PID 1 (systemd). Jun 25 16:31:49.745410 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jun 25 16:31:49.745419 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 16:31:49.745425 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 16:31:49.745430 kernel: SELinux: policy capability open_perms=1 Jun 25 16:31:49.745435 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 16:31:49.745442 kernel: SELinux: policy capability always_check_network=0 Jun 25 16:31:49.745448 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 16:31:49.745454 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 16:31:49.745459 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 16:31:49.745464 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 16:31:49.745470 kernel: audit: type=1403 audit(1719333109.034:81): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 16:31:49.745476 systemd[1]: Successfully loaded SELinux policy in 87.301ms. Jun 25 16:31:49.745484 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.521ms. Jun 25 16:31:49.745492 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:31:49.745499 systemd[1]: Detected virtualization vmware. Jun 25 16:31:49.745505 systemd[1]: Detected architecture x86-64. Jun 25 16:31:49.745512 systemd[1]: Detected first boot. Jun 25 16:31:49.745518 systemd[1]: Initializing machine ID from random generator. Jun 25 16:31:49.745524 kernel: audit: type=1334 audit(1719333109.194:82): prog-id=10 op=LOAD Jun 25 16:31:49.745530 systemd[1]: Populated /etc with preset unit settings. Jun 25 16:31:49.745537 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 25 16:31:49.745544 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Jun 25 16:31:49.745551 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 16:31:49.745557 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 16:31:49.745565 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 16:31:49.745571 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 16:31:49.745578 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 16:31:49.745584 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 16:31:49.745592 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 16:31:49.745604 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 16:31:49.745616 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Jun 25 16:31:49.745627 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 16:31:49.745634 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 16:31:49.745641 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:31:49.745648 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 16:31:49.745654 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 16:31:49.745660 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 16:31:49.745667 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 16:31:49.745673 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 16:31:49.745681 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 16:31:49.745689 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 16:31:49.745696 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:31:49.745702 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:31:49.745709 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:31:49.745715 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:31:49.745722 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 16:31:49.745729 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 16:31:49.745736 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 16:31:49.745743 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:31:49.745750 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:31:49.745756 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:31:49.745763 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 16:31:49.745770 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 16:31:49.745777 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 16:31:49.745784 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 16:31:49.745791 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:31:49.745798 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 16:31:49.745804 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 16:31:49.745812 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 16:31:49.745819 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 16:31:49.745885 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Jun 25 16:31:49.745896 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:31:49.745902 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 16:31:49.745909 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:31:49.745916 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jun 25 16:31:49.745923 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:31:49.745930 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 16:31:49.745937 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:31:49.745945 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 16:31:49.745953 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 16:31:49.745959 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 16:31:49.745966 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 16:31:49.745973 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 16:31:49.745980 systemd[1]: Stopped systemd-journald.service - Journal Service. Jun 25 16:31:49.745987 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 16:31:49.745994 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:31:49.746002 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 16:31:49.746009 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 16:31:49.746015 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:31:49.746023 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 16:31:49.746030 systemd[1]: Stopped verity-setup.service. Jun 25 16:31:49.746037 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:31:49.746046 systemd-journald[1097]: Journal started Jun 25 16:31:49.746074 systemd-journald[1097]: Runtime Journal (/run/log/journal/6b2d546aa3474ea0bd4accf175f8ae6e) is 4.8M, max 38.7M, 33.9M free. Jun 25 16:31:49.034000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 16:31:49.194000 audit: BPF prog-id=10 op=LOAD Jun 25 16:31:49.194000 audit: BPF prog-id=10 op=UNLOAD Jun 25 16:31:49.194000 audit: BPF prog-id=11 op=LOAD Jun 25 16:31:49.194000 audit: BPF prog-id=11 op=UNLOAD Jun 25 16:31:49.658000 audit: BPF prog-id=12 op=LOAD Jun 25 16:31:49.658000 audit: BPF prog-id=3 op=UNLOAD Jun 25 16:31:49.658000 audit: BPF prog-id=13 op=LOAD Jun 25 16:31:49.658000 audit: BPF prog-id=14 op=LOAD Jun 25 16:31:49.658000 audit: BPF prog-id=4 op=UNLOAD Jun 25 16:31:49.658000 audit: BPF prog-id=5 op=UNLOAD Jun 25 16:31:49.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:31:49.663000 audit: BPF prog-id=12 op=UNLOAD Jun 25 16:31:49.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.723000 audit: BPF prog-id=15 op=LOAD Jun 25 16:31:49.723000 audit: BPF prog-id=16 op=LOAD Jun 25 16:31:49.723000 audit: BPF prog-id=17 op=LOAD Jun 25 16:31:49.723000 audit: BPF prog-id=13 op=UNLOAD Jun 25 16:31:49.723000 audit: BPF prog-id=14 op=UNLOAD Jun 25 16:31:49.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.742000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 16:31:49.742000 audit[1097]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff750aabf0 a2=4000 a3=7fff750aac8c items=0 ppid=1 pid=1097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:49.742000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 16:31:49.650743 systemd[1]: Queued start job for default target multi-user.target. Jun 25 16:31:49.650751 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 25 16:31:49.660148 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 16:31:49.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.748458 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 16:31:49.748627 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 16:31:49.749827 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:31:49.748848 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 16:31:49.748975 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 16:31:49.749117 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 16:31:49.750266 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 16:31:49.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:31:49.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.753941 jq[1084]: true Jun 25 16:31:49.750482 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:31:49.750717 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:31:49.750790 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:31:49.751007 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:31:49.751077 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:31:49.756360 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 16:31:49.756446 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 16:31:49.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.757459 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 16:31:49.759556 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 16:31:49.762729 jq[1105]: true Jun 25 16:31:49.766124 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 16:31:49.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.766299 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 16:31:49.770183 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 16:31:49.771301 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 16:31:49.771417 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:31:49.772313 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... 
Jun 25 16:31:49.772645 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 16:31:49.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.773344 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 16:31:49.774095 kernel: fuse: init (API version 7.37) Jun 25 16:31:49.774127 kernel: loop: module loaded Jun 25 16:31:49.775158 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 16:31:49.775254 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 16:31:49.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.775527 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:31:49.775601 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:31:49.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.776817 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 16:31:49.776949 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:31:49.778861 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 16:31:49.782263 systemd-journald[1097]: Time spent on flushing to /var/log/journal/6b2d546aa3474ea0bd4accf175f8ae6e is 51.375ms for 1956 entries. Jun 25 16:31:49.782263 systemd-journald[1097]: System Journal (/var/log/journal/6b2d546aa3474ea0bd4accf175f8ae6e) is 8.0M, max 584.8M, 576.8M free. Jun 25 16:31:49.846594 systemd-journald[1097]: Received client request to flush runtime journal. Jun 25 16:31:49.846629 kernel: ACPI: bus type drm_connector registered Jun 25 16:31:49.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:31:49.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.785044 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 16:31:49.785217 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 16:31:49.790306 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:31:49.791377 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:31:49.817825 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:31:49.817916 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:31:49.822243 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:31:49.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.847584 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 16:31:49.887949 ignition[1111]: Ignition 2.15.0 Jun 25 16:31:49.888354 ignition[1111]: deleting config from guestinfo properties Jun 25 16:31:49.905982 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:31:49.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.932586 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 16:31:49.934455 ignition[1111]: Successfully deleted config Jun 25 16:31:49.937193 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 16:31:49.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.938346 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 16:31:49.941039 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Jun 25 16:31:49.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:49.942950 udevadm[1144]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jun 25 16:31:49.959882 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 16:31:49.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:31:49.963230 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:31:49.976690 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:31:49.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:50.475784 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 16:31:50.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:50.475000 audit: BPF prog-id=18 op=LOAD Jun 25 16:31:50.475000 audit: BPF prog-id=19 op=LOAD Jun 25 16:31:50.475000 audit: BPF prog-id=7 op=UNLOAD Jun 25 16:31:50.475000 audit: BPF prog-id=8 op=UNLOAD Jun 25 16:31:50.482255 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:31:50.494277 systemd-udevd[1149]: Using default interface naming scheme 'v252'. Jun 25 16:31:50.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:50.554000 audit: BPF prog-id=20 op=LOAD Jun 25 16:31:50.554541 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:31:50.560223 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:31:50.561000 audit: BPF prog-id=21 op=LOAD Jun 25 16:31:50.561000 audit: BPF prog-id=22 op=LOAD Jun 25 16:31:50.561000 audit: BPF prog-id=23 op=LOAD Jun 25 16:31:50.562949 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 16:31:50.590732 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 25 16:31:50.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:50.595295 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 16:31:50.627102 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jun 25 16:31:50.631114 kernel: ACPI: button: Power Button [PWRF] Jun 25 16:31:50.648172 systemd-networkd[1156]: lo: Link UP Jun 25 16:31:50.648344 systemd-networkd[1156]: lo: Gained carrier Jun 25 16:31:50.648647 systemd-networkd[1156]: Enumeration completed Jun 25 16:31:50.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:50.648735 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:31:50.650026 systemd-networkd[1156]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Jun 25 16:31:50.650416 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jun 25 16:31:50.652723 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jun 25 16:31:50.652843 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jun 25 16:31:50.653655 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Jun 25 16:31:50.654239 systemd-networkd[1156]: ens192: Link UP Jun 25 16:31:50.654389 systemd-networkd[1156]: ens192: Gained carrier Jun 25 16:31:50.670121 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1155) Jun 25 16:31:50.705585 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1164) Jun 25 16:31:50.705633 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Jun 25 16:31:50.716335 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jun 25 16:31:50.718588 kernel: Guest personality initialized and is active Jun 25 16:31:50.718611 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jun 25 16:31:50.718643 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jun 25 16:31:50.718658 kernel: Initialized host personality Jun 25 16:31:50.732289 (udev-worker)[1155]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jun 25 16:31:50.736482 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jun 25 16:31:50.748525 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 16:31:50.765375 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 16:31:50.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:50.775278 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 16:31:50.783663 lvm[1186]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:31:50.816791 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 16:31:50.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:50.817030 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:31:50.820279 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 16:31:50.823239 lvm[1187]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:31:50.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:50.852930 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 16:31:50.853185 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:31:50.853318 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 16:31:50.853336 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:31:50.853456 systemd[1]: Reached target machines.target - Containers. 
Jun 25 16:31:50.856284 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 16:31:50.856783 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:31:50.856826 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:31:50.857921 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 16:31:50.858864 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 16:31:50.860304 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 16:31:50.864467 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 16:31:50.864810 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1189 (bootctl) Jun 25 16:31:50.865904 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 16:31:50.877226 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 16:31:50.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:50.879103 kernel: loop0: detected capacity change from 0 to 3000 Jun 25 16:31:51.081105 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 16:31:51.129106 kernel: loop1: detected capacity change from 0 to 209816 Jun 25 16:31:51.387219 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 16:31:51.387603 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 16:31:51.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:51.413106 kernel: loop2: detected capacity change from 0 to 139360 Jun 25 16:31:51.427297 systemd-fsck[1196]: fsck.fat 4.2 (2021-01-31) Jun 25 16:31:51.427297 systemd-fsck[1196]: /dev/sda1: 808 files, 120378/258078 clusters Jun 25 16:31:51.428680 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:31:51.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:51.433187 systemd[1]: Mounting boot.mount - Boot partition... Jun 25 16:31:51.479234 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 16:31:51.490209 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jun 25 16:31:51.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:31:51.499107 kernel: loop3: detected capacity change from 0 to 80584 Jun 25 16:31:51.587104 kernel: loop4: detected capacity change from 0 to 3000 Jun 25 16:31:51.603109 kernel: loop5: detected capacity change from 0 to 209816 Jun 25 16:31:51.665109 kernel: loop6: detected capacity change from 0 to 139360 Jun 25 16:31:51.691104 kernel: loop7: detected capacity change from 0 to 80584 Jun 25 16:31:51.838545 (sd-sysext)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Jun 25 16:31:51.839903 (sd-sysext)[1203]: Merged extensions into '/usr'. Jun 25 16:31:51.840856 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 16:31:51.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:51.846240 systemd[1]: Starting ensure-sysext.service... Jun 25 16:31:51.848022 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:31:51.862517 systemd[1]: Reloading. Jun 25 16:31:51.863352 systemd-tmpfiles[1205]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 16:31:51.864642 systemd-tmpfiles[1205]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 16:31:51.864909 systemd-tmpfiles[1205]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 16:31:51.865547 systemd-tmpfiles[1205]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 16:31:51.975492 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 25 16:31:51.988376 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:31:52.013755 ldconfig[1188]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 16:31:52.025000 audit: BPF prog-id=24 op=LOAD Jun 25 16:31:52.025000 audit: BPF prog-id=15 op=UNLOAD Jun 25 16:31:52.025000 audit: BPF prog-id=25 op=LOAD Jun 25 16:31:52.025000 audit: BPF prog-id=26 op=LOAD Jun 25 16:31:52.025000 audit: BPF prog-id=16 op=UNLOAD Jun 25 16:31:52.025000 audit: BPF prog-id=17 op=UNLOAD Jun 25 16:31:52.026000 audit: BPF prog-id=27 op=LOAD Jun 25 16:31:52.026000 audit: BPF prog-id=21 op=UNLOAD Jun 25 16:31:52.026000 audit: BPF prog-id=28 op=LOAD Jun 25 16:31:52.026000 audit: BPF prog-id=29 op=LOAD Jun 25 16:31:52.026000 audit: BPF prog-id=22 op=UNLOAD Jun 25 16:31:52.026000 audit: BPF prog-id=23 op=UNLOAD Jun 25 16:31:52.026000 audit: BPF prog-id=30 op=LOAD Jun 25 16:31:52.026000 audit: BPF prog-id=20 op=UNLOAD Jun 25 16:31:52.027000 audit: BPF prog-id=31 op=LOAD Jun 25 16:31:52.027000 audit: BPF prog-id=32 op=LOAD Jun 25 16:31:52.027000 audit: BPF prog-id=18 op=UNLOAD Jun 25 16:31:52.027000 audit: BPF prog-id=19 op=UNLOAD Jun 25 16:31:52.032439 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Jun 25 16:31:52.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:52.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:52.042590 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:31:52.045035 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:31:52.049056 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 16:31:52.050447 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 16:31:52.050000 audit: BPF prog-id=33 op=LOAD Jun 25 16:31:52.052026 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:31:52.051000 audit: BPF prog-id=34 op=LOAD Jun 25 16:31:52.053653 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 16:31:52.055862 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 16:31:52.061143 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:31:52.062004 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:31:52.062913 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:31:52.062000 audit[1287]: SYSTEM_BOOT pid=1287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 16:31:52.063781 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:31:52.063925 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:31:52.064002 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:31:52.064076 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:31:52.066768 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 16:31:52.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:52.067384 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:31:52.067468 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:31:52.067528 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jun 25 16:31:52.067589 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:31:52.069299 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:31:52.072071 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:31:52.072256 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:31:52.072331 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:31:52.072543 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:31:52.074035 systemd[1]: Finished ensure-sysext.service. Jun 25 16:31:52.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:52.074477 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:31:52.074561 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:31:52.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:52.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:52.075542 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:31:52.075625 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:31:52.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:52.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:52.075774 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:31:52.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:52.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:52.076970 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:31:52.077047 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jun 25 16:31:52.077290 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:31:52.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:52.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:52.084626 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:31:52.084707 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:31:52.090231 systemd-networkd[1156]: ens192: Gained IPv6LL Jun 25 16:31:52.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:52.092564 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 16:31:52.114217 systemd-resolved[1285]: Positive Trust Anchors: Jun 25 16:31:52.114228 systemd-resolved[1285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:31:52.114248 systemd-resolved[1285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:31:52.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:52.116538 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 16:31:52.116704 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 16:31:52.125698 systemd-resolved[1285]: Defaulting to hostname 'linux'. Jun 25 16:31:52.126763 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:31:52.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:52.126906 systemd[1]: Reached target network.target - Network. Jun 25 16:31:52.126988 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 16:31:52.127120 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:31:52.128679 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 16:31:52.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:52.135246 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jun 25 16:31:52.142954 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 16:31:52.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:31:52.142000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 16:31:52.142000 audit[1307]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffa10b5e10 a2=420 a3=0 items=0 ppid=1281 pid=1307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:52.143778 augenrules[1307]: No rules Jun 25 16:31:52.142000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 16:31:52.144065 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:31:52.197625 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 16:31:52.197817 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 16:31:52.197843 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:31:52.198009 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 16:31:52.198147 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 16:31:52.198346 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 16:31:52.198512 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 16:31:52.198654 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 16:31:52.198771 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 16:31:52.198792 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:31:52.198878 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:31:52.199255 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 16:31:52.200300 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 16:31:52.204114 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 16:31:52.204284 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:31:52.204542 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 16:31:52.204680 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:31:52.204769 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:31:52.204879 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:31:52.204897 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jun 25 16:31:52.205622 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 16:31:52.206495 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Jun 25 16:31:52.207779 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 16:31:52.208724 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 16:31:52.209600 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 16:31:52.210100 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 16:31:52.210809 jq[1317]: false Jun 25 16:31:52.214359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:31:52.215338 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 16:31:52.216228 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 16:31:52.217094 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 16:31:52.217978 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 16:31:52.219129 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 16:31:52.221154 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 16:31:52.221282 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:31:52.221313 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 16:31:52.221651 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 16:31:52.223204 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 16:31:52.224402 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 16:31:52.226544 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Jun 25 16:31:52.229062 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 16:31:52.229222 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 16:31:52.236123 jq[1330]: true Jun 25 16:31:52.236784 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 16:31:52.236887 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 16:31:52.245961 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Jun 25 16:31:52.247370 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... 
Jun 25 16:31:52.255651 extend-filesystems[1318]: Found loop4 Jun 25 16:31:52.255943 extend-filesystems[1318]: Found loop5 Jun 25 16:31:52.256075 extend-filesystems[1318]: Found loop6 Jun 25 16:31:52.256212 extend-filesystems[1318]: Found loop7 Jun 25 16:31:52.256336 extend-filesystems[1318]: Found sda Jun 25 16:31:52.256460 extend-filesystems[1318]: Found sda1 Jun 25 16:31:52.256582 extend-filesystems[1318]: Found sda2 Jun 25 16:31:52.256705 extend-filesystems[1318]: Found sda3 Jun 25 16:31:52.256825 extend-filesystems[1318]: Found usr Jun 25 16:31:52.256947 extend-filesystems[1318]: Found sda4 Jun 25 16:31:52.257066 extend-filesystems[1318]: Found sda6 Jun 25 16:31:52.257190 extend-filesystems[1318]: Found sda7 Jun 25 16:31:52.257309 extend-filesystems[1318]: Found sda9 Jun 25 16:31:52.257425 extend-filesystems[1318]: Checking size of /dev/sda9 Jun 25 16:31:52.266141 tar[1336]: linux-amd64/helm Jun 25 16:31:52.270568 jq[1337]: true Jun 25 16:31:52.274749 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Jun 25 16:31:52.275357 update_engine[1328]: I0625 16:31:52.271672 1328 main.cc:92] Flatcar Update Engine starting Jun 25 16:31:52.277208 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 16:31:52.277348 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 16:32:46.860548 systemd-resolved[1285]: Clock change detected. Flushing caches. Jun 25 16:32:46.860616 systemd-timesyncd[1286]: Contacted time server 74.6.168.73:123 (0.flatcar.pool.ntp.org). Jun 25 16:32:46.860645 systemd-timesyncd[1286]: Initial clock synchronization to Tue 2024-06-25 16:32:46.860519 UTC. Jun 25 16:32:46.871502 extend-filesystems[1318]: Old size kept for /dev/sda9 Jun 25 16:32:46.875052 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 16:32:46.875862 extend-filesystems[1318]: Found sr0 Jun 25 16:32:46.877147 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 16:32:46.877246 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 16:32:46.890661 dbus-daemon[1316]: [system] SELinux support is enabled Jun 25 16:32:46.890844 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 16:32:46.892152 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 16:32:46.892181 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 16:32:46.892304 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 16:32:46.892318 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 16:32:46.895753 systemd[1]: Started update-engine.service - Update Engine. Jun 25 16:32:46.897207 update_engine[1328]: I0625 16:32:46.895797 1328 update_check_scheduler.cc:74] Next update check in 9m51s Jun 25 16:32:46.900832 unknown[1346]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Jun 25 16:32:46.900841 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 16:32:46.904087 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 25 16:32:46.904203 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. 
Jun 25 16:32:46.904413 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 16:32:46.910029 unknown[1346]: Core dump limit set to -1 Jun 25 16:32:46.922600 kernel: NET: Registered PF_VSOCK protocol family Jun 25 16:32:46.927613 bash[1374]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:32:46.928996 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 16:32:46.929435 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 25 16:32:46.950438 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1154) Jun 25 16:32:46.959100 systemd-logind[1326]: Watching system buttons on /dev/input/event1 (Power Button) Jun 25 16:32:46.960279 systemd-logind[1326]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 16:32:46.960493 systemd-logind[1326]: New seat seat0. Jun 25 16:32:46.963937 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 16:32:47.066005 locksmithd[1385]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 16:32:47.152473 containerd[1344]: time="2024-06-25T16:32:47.152424389Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 16:32:47.192753 containerd[1344]: time="2024-06-25T16:32:47.192718142Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 16:32:47.193534 containerd[1344]: time="2024-06-25T16:32:47.193522544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:32:47.194435 containerd[1344]: time="2024-06-25T16:32:47.194416730Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:32:47.194486 containerd[1344]: time="2024-06-25T16:32:47.194477855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:32:47.194691 containerd[1344]: time="2024-06-25T16:32:47.194679527Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:32:47.194741 containerd[1344]: time="2024-06-25T16:32:47.194731381Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 16:32:47.194848 containerd[1344]: time="2024-06-25T16:32:47.194838603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 16:32:47.194936 containerd[1344]: time="2024-06-25T16:32:47.194926738Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:32:47.194980 containerd[1344]: time="2024-06-25T16:32:47.194969225Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jun 25 16:32:47.195075 containerd[1344]: time="2024-06-25T16:32:47.195066364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:32:47.195278 containerd[1344]: time="2024-06-25T16:32:47.195269819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 16:32:47.195322 containerd[1344]: time="2024-06-25T16:32:47.195313402Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 16:32:47.195358 containerd[1344]: time="2024-06-25T16:32:47.195350908Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:32:47.195487 containerd[1344]: time="2024-06-25T16:32:47.195476982Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:32:47.195527 containerd[1344]: time="2024-06-25T16:32:47.195519503Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 16:32:47.195589 containerd[1344]: time="2024-06-25T16:32:47.195580175Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 16:32:47.195627 containerd[1344]: time="2024-06-25T16:32:47.195619957Z" level=info msg="metadata content store policy set" policy=shared Jun 25 16:32:47.212007 containerd[1344]: time="2024-06-25T16:32:47.211983572Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 16:32:47.212106 containerd[1344]: time="2024-06-25T16:32:47.212097596Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 16:32:47.212147 containerd[1344]: time="2024-06-25T16:32:47.212139819Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 16:32:47.212204 containerd[1344]: time="2024-06-25T16:32:47.212196610Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 16:32:47.212244 containerd[1344]: time="2024-06-25T16:32:47.212237142Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 16:32:47.212300 containerd[1344]: time="2024-06-25T16:32:47.212291378Z" level=info msg="NRI interface is disabled by configuration." Jun 25 16:32:47.212337 containerd[1344]: time="2024-06-25T16:32:47.212330619Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 16:32:47.212459 containerd[1344]: time="2024-06-25T16:32:47.212449500Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 16:32:47.212505 containerd[1344]: time="2024-06-25T16:32:47.212496030Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 16:32:47.212787 containerd[1344]: time="2024-06-25T16:32:47.212546269Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 16:32:47.212832 containerd[1344]: time="2024-06-25T16:32:47.212823930Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jun 25 16:32:47.212885 containerd[1344]: time="2024-06-25T16:32:47.212876878Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 16:32:47.212929 containerd[1344]: time="2024-06-25T16:32:47.212921651Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 16:32:47.212978 containerd[1344]: time="2024-06-25T16:32:47.212970475Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 16:32:47.213014 containerd[1344]: time="2024-06-25T16:32:47.213007683Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 16:32:47.213059 containerd[1344]: time="2024-06-25T16:32:47.213050269Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 16:32:47.213099 containerd[1344]: time="2024-06-25T16:32:47.213092421Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 16:32:47.213146 containerd[1344]: time="2024-06-25T16:32:47.213137581Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 16:32:47.213198 containerd[1344]: time="2024-06-25T16:32:47.213189587Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 16:32:47.213308 containerd[1344]: time="2024-06-25T16:32:47.213289602Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 16:32:47.213597 containerd[1344]: time="2024-06-25T16:32:47.213585507Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 16:32:47.213649 containerd[1344]: time="2024-06-25T16:32:47.213641491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 16:32:47.213695 containerd[1344]: time="2024-06-25T16:32:47.213687578Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 16:32:47.213757 containerd[1344]: time="2024-06-25T16:32:47.213748483Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 16:32:47.213827 containerd[1344]: time="2024-06-25T16:32:47.213819274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 16:32:47.213868 containerd[1344]: time="2024-06-25T16:32:47.213861196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 16:32:47.216469 containerd[1344]: time="2024-06-25T16:32:47.215414745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 16:32:47.216539 containerd[1344]: time="2024-06-25T16:32:47.216529317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 16:32:47.216591 containerd[1344]: time="2024-06-25T16:32:47.216577904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 16:32:47.216633 containerd[1344]: time="2024-06-25T16:32:47.216625716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jun 25 16:32:47.216669 containerd[1344]: time="2024-06-25T16:32:47.216662482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 16:32:47.216741 containerd[1344]: time="2024-06-25T16:32:47.216712826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 16:32:47.216793 containerd[1344]: time="2024-06-25T16:32:47.216784007Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 16:32:47.216909 containerd[1344]: time="2024-06-25T16:32:47.216899987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 16:32:47.216958 containerd[1344]: time="2024-06-25T16:32:47.216950192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 16:32:47.216995 containerd[1344]: time="2024-06-25T16:32:47.216988236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 16:32:47.217038 containerd[1344]: time="2024-06-25T16:32:47.217031232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 16:32:47.217744 containerd[1344]: time="2024-06-25T16:32:47.217067703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 16:32:47.217791 containerd[1344]: time="2024-06-25T16:32:47.217782313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 16:32:47.217841 containerd[1344]: time="2024-06-25T16:32:47.217831851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 16:32:47.217880 containerd[1344]: time="2024-06-25T16:32:47.217872791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 16:32:47.218099 containerd[1344]: time="2024-06-25T16:32:47.218067308Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 16:32:47.218231 containerd[1344]: time="2024-06-25T16:32:47.218221661Z" level=info msg="Connect containerd service" Jun 25 16:32:47.218287 containerd[1344]: time="2024-06-25T16:32:47.218273944Z" level=info msg="using legacy CRI server" Jun 25 16:32:47.218327 containerd[1344]: time="2024-06-25T16:32:47.218319778Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 16:32:47.218384 containerd[1344]: time="2024-06-25T16:32:47.218375904Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 16:32:47.218840 containerd[1344]: time="2024-06-25T16:32:47.218827155Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 16:32:47.221048 containerd[1344]: time="2024-06-25T16:32:47.221029733Z" 
level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 16:32:47.221079 containerd[1344]: time="2024-06-25T16:32:47.221048858Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 16:32:47.221079 containerd[1344]: time="2024-06-25T16:32:47.221057930Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 16:32:47.221079 containerd[1344]: time="2024-06-25T16:32:47.221065020Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 16:32:47.221375 containerd[1344]: time="2024-06-25T16:32:47.221362560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 16:32:47.221408 containerd[1344]: time="2024-06-25T16:32:47.221403138Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 16:32:47.221440 containerd[1344]: time="2024-06-25T16:32:47.221423625Z" level=info msg="Start subscribing containerd event" Jun 25 16:32:47.221466 containerd[1344]: time="2024-06-25T16:32:47.221456843Z" level=info msg="Start recovering state" Jun 25 16:32:47.221505 containerd[1344]: time="2024-06-25T16:32:47.221496724Z" level=info msg="Start event monitor" Jun 25 16:32:47.221526 containerd[1344]: time="2024-06-25T16:32:47.221510169Z" level=info msg="Start snapshots syncer" Jun 25 16:32:47.221526 containerd[1344]: time="2024-06-25T16:32:47.221515870Z" level=info msg="Start cni network conf syncer for default" Jun 25 16:32:47.221526 containerd[1344]: time="2024-06-25T16:32:47.221520173Z" level=info msg="Start streaming server" Jun 25 16:32:47.222586 containerd[1344]: time="2024-06-25T16:32:47.221561717Z" level=info msg="containerd successfully booted in 0.072034s" Jun 25 16:32:47.221616 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 16:32:47.483495 tar[1336]: linux-amd64/LICENSE Jun 25 16:32:47.483616 tar[1336]: linux-amd64/README.md Jun 25 16:32:47.491269 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 16:32:47.798559 sshd_keygen[1362]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 16:32:47.811834 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 16:32:47.821030 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 16:32:47.824076 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 16:32:47.824170 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 16:32:47.825338 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 16:32:47.835121 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 16:32:47.839957 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 16:32:47.840951 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 16:32:47.841160 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 16:32:48.374378 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:32:48.374763 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 16:32:48.376462 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 16:32:48.382888 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Jun 25 16:32:48.383008 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 16:32:48.383215 systemd[1]: Startup finished in 951ms (kernel) + 4.349s (initrd) + 4.859s (userspace) = 10.160s. Jun 25 16:32:48.408140 login[1459]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 16:32:48.409508 login[1460]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 16:32:48.414534 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 16:32:48.420965 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 16:32:48.423126 systemd-logind[1326]: New session 2 of user core. Jun 25 16:32:48.426185 systemd-logind[1326]: New session 1 of user core. Jun 25 16:32:48.429110 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 16:32:48.432943 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 16:32:48.437747 (systemd)[1466]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:32:48.507576 systemd[1466]: Queued start job for default target default.target. Jun 25 16:32:48.512014 systemd[1466]: Reached target paths.target - Paths. Jun 25 16:32:48.512033 systemd[1466]: Reached target sockets.target - Sockets. Jun 25 16:32:48.512045 systemd[1466]: Reached target timers.target - Timers. Jun 25 16:32:48.512056 systemd[1466]: Reached target basic.target - Basic System. Jun 25 16:32:48.512126 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 16:32:48.513153 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 16:32:48.513800 systemd[1466]: Reached target default.target - Main User Target. Jun 25 16:32:48.513828 systemd[1466]: Startup finished in 72ms. Jun 25 16:32:48.513894 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 16:32:49.038184 kubelet[1463]: E0625 16:32:49.038126 1463 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:32:49.039476 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:32:49.039557 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:32:59.290101 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 16:32:59.290231 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:32:59.299992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:32:59.460989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:32:59.521511 kubelet[1496]: E0625 16:32:59.521483 1496 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:32:59.523902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:32:59.523980 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
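Both kubelet start attempts above exit for the same reason: /var/lib/kubelet/config.yaml does not exist yet, so systemd keeps scheduling restarts at roughly ten-second intervals in the entries that follow. A minimal sketch (not part of the log) for checking that state on the node, using only the path quoted in the error message; whether the file should already exist depends on how the node is bootstrapped (for example, kubeadm writes it during init/join):

    from pathlib import Path

    cfg = Path("/var/lib/kubelet/config.yaml")  # path taken from the kubelet error above
    if cfg.exists():
        print(f"{cfg} present, {cfg.stat().st_size} bytes")
    else:
        print(f"{cfg} missing - kubelet.service will keep exiting until it is provisioned")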
Jun 25 16:33:09.774551 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 16:33:09.774683 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:33:09.781902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:33:10.047088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:33:10.090276 kubelet[1507]: E0625 16:33:10.090237 1507 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:33:10.091770 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:33:10.091846 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:33:20.305129 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 16:33:20.305251 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:33:20.314925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:33:20.647035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:33:20.701237 kubelet[1517]: E0625 16:33:20.701210 1517 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:33:20.702496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:33:20.702574 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:33:26.981202 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 16:33:26.982314 systemd[1]: Started sshd@0-139.178.70.105:22-139.178.68.195:33776.service - OpenSSH per-connection server daemon (139.178.68.195:33776). Jun 25 16:33:27.011862 sshd[1525]: Accepted publickey for core from 139.178.68.195 port 33776 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:33:27.012746 sshd[1525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:27.016617 systemd-logind[1326]: New session 3 of user core. Jun 25 16:33:27.021844 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 16:33:27.076382 systemd[1]: Started sshd@1-139.178.70.105:22-139.178.68.195:33786.service - OpenSSH per-connection server daemon (139.178.68.195:33786). Jun 25 16:33:27.103740 sshd[1530]: Accepted publickey for core from 139.178.68.195 port 33786 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:33:27.104530 sshd[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:27.107110 systemd-logind[1326]: New session 4 of user core. Jun 25 16:33:27.113827 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 16:33:27.163419 sshd[1530]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:27.171149 systemd[1]: sshd@1-139.178.70.105:22-139.178.68.195:33786.service: Deactivated successfully. Jun 25 16:33:27.171590 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 16:33:27.171962 systemd-logind[1326]: Session 4 logged out. 
Waiting for processes to exit. Jun 25 16:33:27.172767 systemd[1]: Started sshd@2-139.178.70.105:22-139.178.68.195:33800.service - OpenSSH per-connection server daemon (139.178.68.195:33800). Jun 25 16:33:27.173836 systemd-logind[1326]: Removed session 4. Jun 25 16:33:27.198284 sshd[1536]: Accepted publickey for core from 139.178.68.195 port 33800 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:33:27.198983 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:27.201627 systemd-logind[1326]: New session 5 of user core. Jun 25 16:33:27.208888 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 16:33:27.256053 sshd[1536]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:27.265425 systemd[1]: sshd@2-139.178.70.105:22-139.178.68.195:33800.service: Deactivated successfully. Jun 25 16:33:27.265896 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 16:33:27.266529 systemd-logind[1326]: Session 5 logged out. Waiting for processes to exit. Jun 25 16:33:27.267504 systemd[1]: Started sshd@3-139.178.70.105:22-139.178.68.195:33816.service - OpenSSH per-connection server daemon (139.178.68.195:33816). Jun 25 16:33:27.268173 systemd-logind[1326]: Removed session 5. Jun 25 16:33:27.295580 sshd[1542]: Accepted publickey for core from 139.178.68.195 port 33816 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:33:27.296448 sshd[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:27.300112 systemd-logind[1326]: New session 6 of user core. Jun 25 16:33:27.307890 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 16:33:27.359808 sshd[1542]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:27.364291 systemd[1]: sshd@3-139.178.70.105:22-139.178.68.195:33816.service: Deactivated successfully. Jun 25 16:33:27.364774 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 16:33:27.365796 systemd-logind[1326]: Session 6 logged out. Waiting for processes to exit. Jun 25 16:33:27.366246 systemd[1]: Started sshd@4-139.178.70.105:22-139.178.68.195:33824.service - OpenSSH per-connection server daemon (139.178.68.195:33824). Jun 25 16:33:27.366991 systemd-logind[1326]: Removed session 6. Jun 25 16:33:27.392898 sshd[1548]: Accepted publickey for core from 139.178.68.195 port 33824 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:33:27.393802 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:27.396182 systemd-logind[1326]: New session 7 of user core. Jun 25 16:33:27.402807 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 16:33:27.457700 sudo[1551]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 16:33:27.457896 sudo[1551]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:33:27.470959 sudo[1551]: pam_unix(sudo:session): session closed for user root Jun 25 16:33:27.472531 sshd[1548]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:27.477008 systemd[1]: sshd@4-139.178.70.105:22-139.178.68.195:33824.service: Deactivated successfully. Jun 25 16:33:27.477346 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 16:33:27.477673 systemd-logind[1326]: Session 7 logged out. Waiting for processes to exit. 
Jun 25 16:33:27.478429 systemd[1]: Started sshd@5-139.178.70.105:22-139.178.68.195:33834.service - OpenSSH per-connection server daemon (139.178.68.195:33834). Jun 25 16:33:27.478930 systemd-logind[1326]: Removed session 7. Jun 25 16:33:27.503431 sshd[1555]: Accepted publickey for core from 139.178.68.195 port 33834 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:33:27.504248 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:27.506754 systemd-logind[1326]: New session 8 of user core. Jun 25 16:33:27.513867 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 16:33:27.562857 sudo[1559]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 16:33:27.563223 sudo[1559]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:33:27.565360 sudo[1559]: pam_unix(sudo:session): session closed for user root Jun 25 16:33:27.569166 sudo[1558]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 16:33:27.569345 sudo[1558]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:33:27.580947 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 16:33:27.581000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:33:27.583596 kernel: kauditd_printk_skb: 112 callbacks suppressed Jun 25 16:33:27.583631 kernel: audit: type=1305 audit(1719333207.581:191): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:33:27.583647 kernel: audit: type=1300 audit(1719333207.581:191): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc9ebaa510 a2=420 a3=0 items=0 ppid=1 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:27.581000 audit[1562]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc9ebaa510 a2=420 a3=0 items=0 ppid=1 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:27.586300 auditctl[1562]: No rules Jun 25 16:33:27.581000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:33:27.589122 kernel: audit: type=1327 audit(1719333207.581:191): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:33:27.589169 kernel: audit: type=1131 audit(1719333207.586:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:27.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:27.586612 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 16:33:27.586742 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 16:33:27.587988 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
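The audit PROCTITLE records in this log carry the audited command line as a hex string with NUL-separated argv elements. A short helper for decoding them; the sample value is copied from the auditctl record above, and the same helper reads the iptables records emitted while Docker sets up its chains further down:

    def decode_proctitle(hex_value: str) -> str:
        """Turn an audit PROCTITLE hex string back into a readable command line."""
        return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode(errors="replace").strip()

    print(decode_proctitle("2F7362696E2F617564697463746C002D44"))  # -> /sbin/auditctl -D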
Jun 25 16:33:27.603579 augenrules[1579]: No rules Jun 25 16:33:27.604115 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:33:27.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:27.604924 sudo[1558]: pam_unix(sudo:session): session closed for user root Jun 25 16:33:27.604000 audit[1558]: USER_END pid=1558 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:33:27.608354 kernel: audit: type=1130 audit(1719333207.603:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:27.608379 kernel: audit: type=1106 audit(1719333207.604:194): pid=1558 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:33:27.608397 kernel: audit: type=1104 audit(1719333207.604:195): pid=1558 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:33:27.604000 audit[1558]: CRED_DISP pid=1558 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:33:27.608518 sshd[1555]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:27.608000 audit[1555]: USER_END pid=1555 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:33:27.612092 kernel: audit: type=1106 audit(1719333207.608:196): pid=1555 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:33:27.612117 kernel: audit: type=1104 audit(1719333207.608:197): pid=1555 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:33:27.608000 audit[1555]: CRED_DISP pid=1555 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:33:27.616062 systemd[1]: sshd@5-139.178.70.105:22-139.178.68.195:33834.service: Deactivated successfully. Jun 25 16:33:27.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.70.105:22-139.178.68.195:33834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jun 25 16:33:27.616379 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 16:33:27.617484 systemd[1]: Started sshd@6-139.178.70.105:22-139.178.68.195:33840.service - OpenSSH per-connection server daemon (139.178.68.195:33840). Jun 25 16:33:27.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.105:22-139.178.68.195:33840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:27.618236 systemd-logind[1326]: Session 8 logged out. Waiting for processes to exit. Jun 25 16:33:27.618759 kernel: audit: type=1131 audit(1719333207.615:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.70.105:22-139.178.68.195:33834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:27.618991 systemd-logind[1326]: Removed session 8. Jun 25 16:33:27.641000 audit[1585]: USER_ACCT pid=1585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:33:27.642208 sshd[1585]: Accepted publickey for core from 139.178.68.195 port 33840 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:33:27.642000 audit[1585]: CRED_ACQ pid=1585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:33:27.642000 audit[1585]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd0566de70 a2=3 a3=7f8590e4c480 items=0 ppid=1 pid=1585 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:27.642000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:27.643284 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:27.646325 systemd-logind[1326]: New session 9 of user core. Jun 25 16:33:27.654826 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 16:33:27.657000 audit[1585]: USER_START pid=1585 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:33:27.658000 audit[1587]: CRED_ACQ pid=1587 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:33:27.704000 audit[1588]: USER_ACCT pid=1588 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:33:27.704000 audit[1588]: CRED_REFR pid=1588 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:33:27.705169 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 16:33:27.705575 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:33:27.706000 audit[1588]: USER_START pid=1588 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:33:27.791923 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 16:33:28.000104 dockerd[1597]: time="2024-06-25T16:33:28.000070925Z" level=info msg="Starting up" Jun 25 16:33:28.009875 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2789456863-merged.mount: Deactivated successfully. Jun 25 16:33:28.017544 systemd[1]: var-lib-docker-metacopy\x2dcheck641880014-merged.mount: Deactivated successfully. Jun 25 16:33:28.030698 dockerd[1597]: time="2024-06-25T16:33:28.030675392Z" level=info msg="Loading containers: start." Jun 25 16:33:28.072000 audit[1629]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1629 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.072000 audit[1629]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc09d5e270 a2=0 a3=7fa5dea2ee90 items=0 ppid=1597 pid=1629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.072000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 16:33:28.073000 audit[1631]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1631 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.073000 audit[1631]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc5d257070 a2=0 a3=7f688d38ee90 items=0 ppid=1597 pid=1631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.073000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 16:33:28.074000 audit[1633]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1633 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.074000 audit[1633]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc8c361b50 a2=0 a3=7f7560339e90 items=0 ppid=1597 pid=1633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.074000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:33:28.075000 audit[1635]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1635 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.075000 audit[1635]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe9ddaa7d0 a2=0 a3=7fc715191e90 items=0 ppid=1597 pid=1635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.075000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:33:28.077000 audit[1637]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1637 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.077000 audit[1637]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffed8337ab0 a2=0 a3=7f42d23bbe90 items=0 ppid=1597 pid=1637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.077000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 16:33:28.079000 audit[1639]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1639 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.079000 audit[1639]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe64eb39c0 a2=0 a3=7f80b0bfde90 items=0 ppid=1597 pid=1639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.079000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 16:33:28.082000 audit[1641]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1641 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.082000 audit[1641]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe049b60e0 a2=0 a3=7f6eee8afe90 items=0 ppid=1597 pid=1641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.082000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 16:33:28.084000 audit[1643]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1643 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.084000 audit[1643]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcbb62a980 a2=0 a3=7f06b2c91e90 items=0 ppid=1597 pid=1643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.084000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 16:33:28.085000 audit[1645]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1645 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.085000 audit[1645]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7fff47ca1260 a2=0 a3=7f09ee59de90 items=0 ppid=1597 pid=1645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.085000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:33:28.089000 audit[1649]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1649 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.089000 audit[1649]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd8e717a00 a2=0 a3=7fa2a5184e90 items=0 ppid=1597 pid=1649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.089000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:33:28.089000 audit[1650]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1650 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.089000 audit[1650]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe52b55b10 a2=0 a3=7fce2c3e8e90 items=0 ppid=1597 pid=1650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.089000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:33:28.095737 kernel: Initializing XFRM netlink socket Jun 25 16:33:28.118000 audit[1658]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1658 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.118000 audit[1658]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffe089363b0 a2=0 a3=7f5785123e90 items=0 ppid=1597 pid=1658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.118000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 16:33:28.126000 audit[1661]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1661 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.126000 audit[1661]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc359630e0 a2=0 a3=7f749b7a8e90 items=0 ppid=1597 pid=1661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.126000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 16:33:28.129000 audit[1665]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1665 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.129000 audit[1665]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdb28558f0 a2=0 a3=7f5bf8618e90 items=0 ppid=1597 pid=1665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.129000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 16:33:28.130000 audit[1667]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1667 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.130000 audit[1667]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffee61fe440 a2=0 a3=7fda73189e90 items=0 ppid=1597 pid=1667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.130000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 16:33:28.131000 audit[1669]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1669 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.131000 audit[1669]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffc92951490 a2=0 a3=7f72e47b9e90 items=0 ppid=1597 pid=1669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.131000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 16:33:28.133000 audit[1671]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1671 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.133000 audit[1671]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffc1c0928e0 a2=0 a3=7fbc49c2fe90 items=0 ppid=1597 pid=1671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.133000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 16:33:28.134000 audit[1673]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1673 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.134000 audit[1673]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffc78c9d910 a2=0 a3=7f9ac3364e90 items=0 ppid=1597 pid=1673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.134000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 16:33:28.138000 audit[1676]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1676 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.138000 audit[1676]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffe759cd180 a2=0 a3=7fdfbfa19e90 items=0 ppid=1597 pid=1676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.138000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 16:33:28.139000 audit[1678]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1678 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.139000 audit[1678]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffe2fe6a7b0 a2=0 a3=7f941615be90 items=0 ppid=1597 pid=1678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.139000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:33:28.141000 audit[1680]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1680 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.141000 audit[1680]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fffa2accb30 a2=0 a3=7f27fa3cde90 items=0 ppid=1597 pid=1680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.141000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:33:28.142000 audit[1682]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1682 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.142000 audit[1682]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcf2fcc990 a2=0 a3=7f1f1e822e90 items=0 ppid=1597 pid=1682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.142000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 16:33:28.143331 systemd-networkd[1156]: docker0: Link UP Jun 25 16:33:28.146000 audit[1686]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1686 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.146000 audit[1686]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc605ba5a0 a2=0 a3=7f54ce69de90 items=0 ppid=1597 pid=1686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.146000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:33:28.147000 audit[1687]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1687 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:28.147000 audit[1687]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe48e85510 a2=0 
a3=7f60ff985e90 items=0 ppid=1597 pid=1687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:28.147000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:33:28.148440 dockerd[1597]: time="2024-06-25T16:33:28.148425999Z" level=info msg="Loading containers: done." Jun 25 16:33:28.201131 dockerd[1597]: time="2024-06-25T16:33:28.201103852Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 16:33:28.201243 dockerd[1597]: time="2024-06-25T16:33:28.201228330Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 16:33:28.201298 dockerd[1597]: time="2024-06-25T16:33:28.201286318Z" level=info msg="Daemon has completed initialization" Jun 25 16:33:28.212432 dockerd[1597]: time="2024-06-25T16:33:28.212401126Z" level=info msg="API listen on /run/docker.sock" Jun 25 16:33:28.215758 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 16:33:28.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:28.868930 containerd[1344]: time="2024-06-25T16:33:28.868903419Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 16:33:29.008122 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck398172467-merged.mount: Deactivated successfully. Jun 25 16:33:29.507460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3622802943.mount: Deactivated successfully. 
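dockerd warns above that it is not using native overlayfs diffs because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled. A sketch for confirming that option on a running node, assuming the kernel exposes its build configuration at /proc/config.gz (only available when CONFIG_IKCONFIG_PROC is set; otherwise check /boot/config-<version> if present):

    import gzip

    with gzip.open("/proc/config.gz", "rt") as f:  # assumes CONFIG_IKCONFIG_PROC=y
        for line in f:
            if "OVERLAY_FS_REDIRECT_DIR" in line:
                print(line.strip())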
Jun 25 16:33:30.711777 containerd[1344]: time="2024-06-25T16:33:30.711744456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:30.712327 containerd[1344]: time="2024-06-25T16:33:30.712304482Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605178" Jun 25 16:33:30.712779 containerd[1344]: time="2024-06-25T16:33:30.712766475Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:30.714108 containerd[1344]: time="2024-06-25T16:33:30.714094797Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:30.714984 containerd[1344]: time="2024-06-25T16:33:30.714968052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:30.715683 containerd[1344]: time="2024-06-25T16:33:30.715668358Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 1.8467389s" Jun 25 16:33:30.715755 containerd[1344]: time="2024-06-25T16:33:30.715744833Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jun 25 16:33:30.728226 containerd[1344]: time="2024-06-25T16:33:30.728205783Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 16:33:30.805171 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 25 16:33:30.805334 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:33:30.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:30.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:30.819927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:33:30.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:30.881425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
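As a rough worked example, the kube-apiserver pull reported above read about 34.6 MB in roughly 1.85 s; assuming the "bytes read" counter reflects the transferred payload, that works out to a little under 19 MB/s:

    bytes_read = 34_605_178  # "bytes read" reported by containerd for the kube-apiserver pull
    elapsed_s = 1.8467389    # pull duration reported by containerd
    print(f"~{bytes_read / elapsed_s / 1e6:.1f} MB/s effective pull rate")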
Jun 25 16:33:30.964144 kubelet[1791]: E0625 16:33:30.963824 1791 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:33:30.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:33:30.965306 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:33:30.965383 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:33:32.226776 update_engine[1328]: I0625 16:33:32.226748 1328 update_attempter.cc:509] Updating boot flags... Jun 25 16:33:32.262744 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1810) Jun 25 16:33:32.302738 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1813) Jun 25 16:33:32.897475 containerd[1344]: time="2024-06-25T16:33:32.897449910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:32.904671 containerd[1344]: time="2024-06-25T16:33:32.904648701Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719491" Jun 25 16:33:32.912242 containerd[1344]: time="2024-06-25T16:33:32.912223519Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:32.919956 containerd[1344]: time="2024-06-25T16:33:32.919938936Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:32.924623 containerd[1344]: time="2024-06-25T16:33:32.924607218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:32.925136 containerd[1344]: time="2024-06-25T16:33:32.925120856Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 2.196786759s" Jun 25 16:33:32.925193 containerd[1344]: time="2024-06-25T16:33:32.925182449Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jun 25 16:33:32.937954 containerd[1344]: time="2024-06-25T16:33:32.937928605Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 16:33:34.045663 containerd[1344]: time="2024-06-25T16:33:34.045637185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:34.046109 containerd[1344]: 
time="2024-06-25T16:33:34.046080427Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925505" Jun 25 16:33:34.046636 containerd[1344]: time="2024-06-25T16:33:34.046618922Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:34.047897 containerd[1344]: time="2024-06-25T16:33:34.047875873Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:34.049465 containerd[1344]: time="2024-06-25T16:33:34.049439855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:34.051899 containerd[1344]: time="2024-06-25T16:33:34.051865473Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 1.113904095s" Jun 25 16:33:34.051899 containerd[1344]: time="2024-06-25T16:33:34.051898964Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jun 25 16:33:34.066461 containerd[1344]: time="2024-06-25T16:33:34.066411419Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 16:33:35.354925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2187726946.mount: Deactivated successfully. 
Jun 25 16:33:35.654377 containerd[1344]: time="2024-06-25T16:33:35.654319859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:35.654789 containerd[1344]: time="2024-06-25T16:33:35.654761885Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419" Jun 25 16:33:35.655097 containerd[1344]: time="2024-06-25T16:33:35.655084419Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:35.655882 containerd[1344]: time="2024-06-25T16:33:35.655857875Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:35.656557 containerd[1344]: time="2024-06-25T16:33:35.656544877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:35.656979 containerd[1344]: time="2024-06-25T16:33:35.656961068Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 1.590502786s" Jun 25 16:33:35.657015 containerd[1344]: time="2024-06-25T16:33:35.656980065Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jun 25 16:33:35.670367 containerd[1344]: time="2024-06-25T16:33:35.670329522Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 16:33:36.157091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount596044002.mount: Deactivated successfully. 
Jun 25 16:33:36.159408 containerd[1344]: time="2024-06-25T16:33:36.159389535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:36.159817 containerd[1344]: time="2024-06-25T16:33:36.159796232Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jun 25 16:33:36.159993 containerd[1344]: time="2024-06-25T16:33:36.159981977Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:36.160921 containerd[1344]: time="2024-06-25T16:33:36.160909600Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:36.161787 containerd[1344]: time="2024-06-25T16:33:36.161776075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:36.162241 containerd[1344]: time="2024-06-25T16:33:36.162223553Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 491.754563ms" Jun 25 16:33:36.162272 containerd[1344]: time="2024-06-25T16:33:36.162243562Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 16:33:36.174227 containerd[1344]: time="2024-06-25T16:33:36.174196609Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 16:33:36.683895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3719681409.mount: Deactivated successfully. 
Jun 25 16:33:38.503554 containerd[1344]: time="2024-06-25T16:33:38.503481051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:38.505491 containerd[1344]: time="2024-06-25T16:33:38.505446351Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jun 25 16:33:38.511304 containerd[1344]: time="2024-06-25T16:33:38.511281590Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:38.520082 containerd[1344]: time="2024-06-25T16:33:38.520060711Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:38.529035 containerd[1344]: time="2024-06-25T16:33:38.529011147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:38.529921 containerd[1344]: time="2024-06-25T16:33:38.529896958Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.355557963s" Jun 25 16:33:38.530004 containerd[1344]: time="2024-06-25T16:33:38.529988075Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 16:33:38.546811 containerd[1344]: time="2024-06-25T16:33:38.546782014Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 16:33:39.173025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1892434306.mount: Deactivated successfully. 
Jun 25 16:33:39.633713 containerd[1344]: time="2024-06-25T16:33:39.633668930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:39.634640 containerd[1344]: time="2024-06-25T16:33:39.634614721Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749" Jun 25 16:33:39.635062 containerd[1344]: time="2024-06-25T16:33:39.635049436Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:39.636025 containerd[1344]: time="2024-06-25T16:33:39.636009896Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:39.636754 containerd[1344]: time="2024-06-25T16:33:39.636736321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:33:39.637242 containerd[1344]: time="2024-06-25T16:33:39.637219015Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.090387813s" Jun 25 16:33:39.637281 containerd[1344]: time="2024-06-25T16:33:39.637244266Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jun 25 16:33:41.055136 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jun 25 16:33:41.055270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:33:41.056524 kernel: kauditd_printk_skb: 88 callbacks suppressed Jun 25 16:33:41.056573 kernel: audit: type=1130 audit(1719333221.054:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:41.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:41.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:41.059033 kernel: audit: type=1131 audit(1719333221.054:238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:41.064911 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:33:41.260195 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:33:41.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:33:41.262738 kernel: audit: type=1130 audit(1719333221.259:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:41.324584 kubelet[1972]: E0625 16:33:41.324509 1972 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:33:41.325567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:33:41.325649 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:33:41.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:33:41.327737 kernel: audit: type=1131 audit(1719333221.325:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:33:41.678936 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:33:41.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:41.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:41.682668 kernel: audit: type=1130 audit(1719333221.678:241): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:41.682700 kernel: audit: type=1131 audit(1719333221.678:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:41.683155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:33:41.705753 systemd[1]: Reloading. Jun 25 16:33:41.812342 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 25 16:33:41.825323 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jun 25 16:33:41.864000 audit: BPF prog-id=38 op=LOAD Jun 25 16:33:41.864000 audit: BPF prog-id=24 op=UNLOAD Jun 25 16:33:41.864000 audit: BPF prog-id=39 op=LOAD Jun 25 16:33:41.866855 kernel: audit: type=1334 audit(1719333221.864:243): prog-id=38 op=LOAD Jun 25 16:33:41.866884 kernel: audit: type=1334 audit(1719333221.864:244): prog-id=24 op=UNLOAD Jun 25 16:33:41.866900 kernel: audit: type=1334 audit(1719333221.864:245): prog-id=39 op=LOAD Jun 25 16:33:41.866913 kernel: audit: type=1334 audit(1719333221.864:246): prog-id=40 op=LOAD Jun 25 16:33:41.864000 audit: BPF prog-id=40 op=LOAD Jun 25 16:33:41.864000 audit: BPF prog-id=25 op=UNLOAD Jun 25 16:33:41.864000 audit: BPF prog-id=26 op=UNLOAD Jun 25 16:33:41.865000 audit: BPF prog-id=41 op=LOAD Jun 25 16:33:41.866000 audit: BPF prog-id=27 op=UNLOAD Jun 25 16:33:41.866000 audit: BPF prog-id=42 op=LOAD Jun 25 16:33:41.866000 audit: BPF prog-id=43 op=LOAD Jun 25 16:33:41.866000 audit: BPF prog-id=28 op=UNLOAD Jun 25 16:33:41.866000 audit: BPF prog-id=29 op=UNLOAD Jun 25 16:33:41.867000 audit: BPF prog-id=44 op=LOAD Jun 25 16:33:41.867000 audit: BPF prog-id=30 op=UNLOAD Jun 25 16:33:41.867000 audit: BPF prog-id=45 op=LOAD Jun 25 16:33:41.867000 audit: BPF prog-id=34 op=UNLOAD Jun 25 16:33:41.867000 audit: BPF prog-id=46 op=LOAD Jun 25 16:33:41.867000 audit: BPF prog-id=47 op=LOAD Jun 25 16:33:41.867000 audit: BPF prog-id=31 op=UNLOAD Jun 25 16:33:41.867000 audit: BPF prog-id=32 op=UNLOAD Jun 25 16:33:41.868000 audit: BPF prog-id=48 op=LOAD Jun 25 16:33:41.868000 audit: BPF prog-id=33 op=UNLOAD Jun 25 16:33:41.869000 audit: BPF prog-id=49 op=LOAD Jun 25 16:33:41.870000 audit: BPF prog-id=35 op=UNLOAD Jun 25 16:33:41.870000 audit: BPF prog-id=50 op=LOAD Jun 25 16:33:41.870000 audit: BPF prog-id=51 op=LOAD Jun 25 16:33:41.870000 audit: BPF prog-id=36 op=UNLOAD Jun 25 16:33:41.870000 audit: BPF prog-id=37 op=UNLOAD Jun 25 16:33:41.885241 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 16:33:41.885383 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 16:33:41.885555 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:33:41.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:33:41.886851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:33:42.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:42.292807 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:33:42.343009 kubelet[2064]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:33:42.343259 kubelet[2064]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:33:42.343295 kubelet[2064]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:33:42.343372 kubelet[2064]: I0625 16:33:42.343351 2064 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:33:42.582963 kubelet[2064]: I0625 16:33:42.582940 2064 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 16:33:42.582963 kubelet[2064]: I0625 16:33:42.582959 2064 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:33:42.583097 kubelet[2064]: I0625 16:33:42.583087 2064 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 16:33:42.605255 kubelet[2064]: E0625 16:33:42.605230 2064 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:33:42.605400 kubelet[2064]: I0625 16:33:42.605391 2064 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:33:42.665883 kubelet[2064]: I0625 16:33:42.665866 2064 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 16:33:42.666630 kubelet[2064]: I0625 16:33:42.666621 2064 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:33:42.666798 kubelet[2064]: I0625 16:33:42.666789 2064 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:33:42.667218 kubelet[2064]: I0625 16:33:42.667210 2064 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:33:42.667263 kubelet[2064]: I0625 16:33:42.667258 2064 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:33:42.667919 kubelet[2064]: I0625 16:33:42.667910 2064 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:33:42.669066 kubelet[2064]: I0625 
16:33:42.669057 2064 kubelet.go:393] "Attempting to sync node with API server" Jun 25 16:33:42.669115 kubelet[2064]: I0625 16:33:42.669110 2064 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:33:42.669169 kubelet[2064]: I0625 16:33:42.669163 2064 kubelet.go:309] "Adding apiserver pod source" Jun 25 16:33:42.669208 kubelet[2064]: I0625 16:33:42.669204 2064 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:33:42.673164 kubelet[2064]: W0625 16:33:42.673080 2064 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:33:42.673164 kubelet[2064]: E0625 16:33:42.673157 2064 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:33:42.673252 kubelet[2064]: I0625 16:33:42.673238 2064 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:33:42.675292 kubelet[2064]: W0625 16:33:42.675269 2064 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:33:42.675363 kubelet[2064]: E0625 16:33:42.675357 2064 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:33:42.675632 kubelet[2064]: W0625 16:33:42.675621 2064 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jun 25 16:33:42.676095 kubelet[2064]: I0625 16:33:42.676083 2064 server.go:1232] "Started kubelet" Jun 25 16:33:42.676165 kubelet[2064]: I0625 16:33:42.676154 2064 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:33:42.677571 kubelet[2064]: I0625 16:33:42.677557 2064 server.go:462] "Adding debug handlers to kubelet server" Jun 25 16:33:42.678830 kubelet[2064]: I0625 16:33:42.678815 2064 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:33:42.678908 kubelet[2064]: I0625 16:33:42.678901 2064 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 16:33:42.679044 kubelet[2064]: I0625 16:33:42.679037 2064 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:33:42.679952 kubelet[2064]: E0625 16:33:42.679895 2064 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17dc4c74e4226166", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.June, 25, 16, 33, 42, 676062566, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 16, 33, 42, 676062566, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://139.178.70.105:6443/api/v1/namespaces/default/events": dial tcp 139.178.70.105:6443: connect: connection refused'(may retry after sleeping) Jun 25 16:33:42.680103 kubelet[2064]: E0625 16:33:42.680095 2064 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 16:33:42.680162 kubelet[2064]: E0625 16:33:42.680156 2064 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:33:42.681000 audit[2075]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:42.681000 audit[2075]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd515faff0 a2=0 a3=7f501233ee90 items=0 ppid=2064 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:42.681000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:33:42.681000 audit[2076]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2076 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:42.681000 audit[2076]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1154cbc0 a2=0 a3=7f2206439e90 items=0 ppid=2064 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:42.681000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:33:42.683426 kubelet[2064]: I0625 16:33:42.683412 2064 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:33:42.684267 kubelet[2064]: E0625 16:33:42.684257 2064 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="200ms" Jun 25 16:33:42.684967 kubelet[2064]: I0625 16:33:42.684954 2064 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:33:42.684000 audit[2078]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:42.684000 audit[2078]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc525cdfe0 a2=0 a3=7f58320ebe90 items=0 ppid=2064 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:42.684000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:33:42.685335 kubelet[2064]: W0625 16:33:42.685199 2064 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:33:42.685383 kubelet[2064]: E0625 16:33:42.685376 2064 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:33:42.686023 kubelet[2064]: I0625 16:33:42.686010 2064 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:33:42.685000 
audit[2080]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:42.685000 audit[2080]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffec9e0c490 a2=0 a3=7f7284095e90 items=0 ppid=2064 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:42.685000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:33:42.701000 audit[2084]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2084 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:42.701000 audit[2084]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc1ae55c80 a2=0 a3=7f065aa14e90 items=0 ppid=2064 pid=2084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:42.701000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 16:33:42.702957 kubelet[2064]: I0625 16:33:42.702947 2064 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:33:42.702000 audit[2085]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:33:42.702000 audit[2085]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd8f561180 a2=0 a3=7f7888a3ae90 items=0 ppid=2064 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:42.702000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:33:42.703761 kubelet[2064]: I0625 16:33:42.703754 2064 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 16:33:42.703821 kubelet[2064]: I0625 16:33:42.703815 2064 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:33:42.703864 kubelet[2064]: I0625 16:33:42.703859 2064 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 16:33:42.703937 kubelet[2064]: E0625 16:33:42.703930 2064 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:33:42.703000 audit[2086]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2086 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:42.703000 audit[2086]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd7dcde220 a2=0 a3=7f7b82688e90 items=0 ppid=2064 pid=2086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:42.703000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:33:42.704000 audit[2087]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:42.704000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6e8d52a0 a2=0 a3=7fc33eb3be90 items=0 ppid=2064 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:42.704000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:33:42.705000 audit[2088]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=2088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:33:42.705000 audit[2088]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe76622070 a2=0 a3=7fb3b3d2ee90 items=0 ppid=2064 pid=2088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:42.705000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:33:42.705000 audit[2089]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=2089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:33:42.705000 audit[2089]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff0e40bb40 a2=0 a3=7fafd8646e90 items=0 ppid=2064 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:42.705000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:33:42.705000 audit[2090]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=2090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:33:42.705000 audit[2090]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffde52a2310 a2=0 a3=7f5f674eae90 items=0 ppid=2064 pid=2090 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:42.705000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:33:42.706922 kubelet[2064]: W0625 16:33:42.706909 2064 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:33:42.706985 kubelet[2064]: E0625 16:33:42.706978 2064 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:33:42.707084 kubelet[2064]: I0625 16:33:42.707074 2064 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:33:42.707127 kubelet[2064]: I0625 16:33:42.707122 2064 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:33:42.707173 kubelet[2064]: I0625 16:33:42.707168 2064 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:33:42.706000 audit[2091]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:33:42.706000 audit[2091]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff5d9061b0 a2=0 a3=7f8aaded9e90 items=0 ppid=2064 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:42.706000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:33:42.708314 kubelet[2064]: I0625 16:33:42.708306 2064 policy_none.go:49] "None policy: Start" Jun 25 16:33:42.708736 kubelet[2064]: I0625 16:33:42.708722 2064 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 16:33:42.708801 kubelet[2064]: I0625 16:33:42.708795 2064 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:33:42.713515 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 16:33:42.727312 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 16:33:42.729094 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jun 25 16:33:42.735155 kubelet[2064]: I0625 16:33:42.735138 2064 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:33:42.735311 kubelet[2064]: I0625 16:33:42.735296 2064 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:33:42.736820 kubelet[2064]: E0625 16:33:42.736792 2064 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 25 16:33:42.785331 kubelet[2064]: I0625 16:33:42.785317 2064 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:33:42.785790 kubelet[2064]: E0625 16:33:42.785776 2064 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jun 25 16:33:42.805100 kubelet[2064]: I0625 16:33:42.805073 2064 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 16:33:42.805978 kubelet[2064]: I0625 16:33:42.805968 2064 topology_manager.go:215] "Topology Admit Handler" podUID="72ebbc77bff49ce7ef33389837179585" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 16:33:42.806594 kubelet[2064]: I0625 16:33:42.806586 2064 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 16:33:42.809826 systemd[1]: Created slice kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice - libcontainer container kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice. Jun 25 16:33:42.818408 systemd[1]: Created slice kubepods-burstable-pod72ebbc77bff49ce7ef33389837179585.slice - libcontainer container kubepods-burstable-pod72ebbc77bff49ce7ef33389837179585.slice. Jun 25 16:33:42.820959 systemd[1]: Created slice kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice - libcontainer container kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice. 
Jun 25 16:33:42.886451 kubelet[2064]: E0625 16:33:42.885306 2064 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="400ms" Jun 25 16:33:42.887542 kubelet[2064]: I0625 16:33:42.887524 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/72ebbc77bff49ce7ef33389837179585-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"72ebbc77bff49ce7ef33389837179585\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:33:42.887630 kubelet[2064]: I0625 16:33:42.887624 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/72ebbc77bff49ce7ef33389837179585-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"72ebbc77bff49ce7ef33389837179585\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:33:42.887689 kubelet[2064]: I0625 16:33:42.887679 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:33:42.887757 kubelet[2064]: I0625 16:33:42.887751 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:33:42.887814 kubelet[2064]: I0625 16:33:42.887808 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jun 25 16:33:42.887862 kubelet[2064]: I0625 16:33:42.887857 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/72ebbc77bff49ce7ef33389837179585-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"72ebbc77bff49ce7ef33389837179585\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:33:42.887911 kubelet[2064]: I0625 16:33:42.887905 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:33:42.887992 kubelet[2064]: I0625 16:33:42.887986 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:33:42.888036 
kubelet[2064]: I0625 16:33:42.888031 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:33:42.987443 kubelet[2064]: I0625 16:33:42.987425 2064 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:33:42.987688 kubelet[2064]: E0625 16:33:42.987677 2064 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jun 25 16:33:43.120178 containerd[1344]: time="2024-06-25T16:33:43.120149469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jun 25 16:33:43.127346 containerd[1344]: time="2024-06-25T16:33:43.126639088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jun 25 16:33:43.127974 containerd[1344]: time="2024-06-25T16:33:43.127932024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:72ebbc77bff49ce7ef33389837179585,Namespace:kube-system,Attempt:0,}" Jun 25 16:33:43.285618 kubelet[2064]: E0625 16:33:43.285539 2064 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="800ms" Jun 25 16:33:43.388706 kubelet[2064]: I0625 16:33:43.388542 2064 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:33:43.388912 kubelet[2064]: E0625 16:33:43.388715 2064 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jun 25 16:33:43.506601 kubelet[2064]: W0625 16:33:43.506534 2064 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:33:43.506601 kubelet[2064]: E0625 16:33:43.506593 2064 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:33:43.652794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2952408658.mount: Deactivated successfully. 
Jun 25 16:33:43.655434 containerd[1344]: time="2024-06-25T16:33:43.655407808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:33:43.656230 containerd[1344]: time="2024-06-25T16:33:43.656205487Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 25 16:33:43.656690 containerd[1344]: time="2024-06-25T16:33:43.656677297Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:33:43.658086 containerd[1344]: time="2024-06-25T16:33:43.658068683Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:33:43.659162 containerd[1344]: time="2024-06-25T16:33:43.659143237Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:33:43.661899 containerd[1344]: time="2024-06-25T16:33:43.661867519Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:33:43.665624 containerd[1344]: time="2024-06-25T16:33:43.665600622Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:33:43.667065 containerd[1344]: time="2024-06-25T16:33:43.667043445Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 540.325884ms" Jun 25 16:33:43.669915 containerd[1344]: time="2024-06-25T16:33:43.669883709Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 549.641483ms" Jun 25 16:33:43.670273 containerd[1344]: time="2024-06-25T16:33:43.670256393Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:33:43.670866 containerd[1344]: time="2024-06-25T16:33:43.670845555Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 542.848808ms" Jun 25 16:33:43.671211 containerd[1344]: time="2024-06-25T16:33:43.671197900Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:33:43.671777 containerd[1344]: 
time="2024-06-25T16:33:43.671765633Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:33:43.672296 containerd[1344]: time="2024-06-25T16:33:43.672285271Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:33:43.672803 containerd[1344]: time="2024-06-25T16:33:43.672791480Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:33:43.673870 containerd[1344]: time="2024-06-25T16:33:43.673790973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:33:43.676485 containerd[1344]: time="2024-06-25T16:33:43.676467818Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:33:43.677096 containerd[1344]: time="2024-06-25T16:33:43.677083042Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:33:43.796582 kubelet[2064]: W0625 16:33:43.796539 2064 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:33:43.796582 kubelet[2064]: E0625 16:33:43.796580 2064 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:33:43.800909 kubelet[2064]: W0625 16:33:43.800878 2064 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:33:43.800909 kubelet[2064]: E0625 16:33:43.800897 2064 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:33:43.917019 containerd[1344]: time="2024-06-25T16:33:43.916920607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:33:43.917159 containerd[1344]: time="2024-06-25T16:33:43.917143642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:33:43.917244 containerd[1344]: time="2024-06-25T16:33:43.917231471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:33:43.917332 containerd[1344]: time="2024-06-25T16:33:43.917319474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:33:43.920979 containerd[1344]: time="2024-06-25T16:33:43.919277678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:33:43.920979 containerd[1344]: time="2024-06-25T16:33:43.919314099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:33:43.920979 containerd[1344]: time="2024-06-25T16:33:43.919326731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:33:43.920979 containerd[1344]: time="2024-06-25T16:33:43.919335350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:33:43.921272 containerd[1344]: time="2024-06-25T16:33:43.921243424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:33:43.921331 containerd[1344]: time="2024-06-25T16:33:43.921318995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:33:43.921394 containerd[1344]: time="2024-06-25T16:33:43.921372300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:33:43.921447 containerd[1344]: time="2024-06-25T16:33:43.921436568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:33:43.931862 systemd[1]: Started cri-containerd-a61a699187054d1777ea550086a8ee2f8eec513097784513fa17d2a563f0b037.scope - libcontainer container a61a699187054d1777ea550086a8ee2f8eec513097784513fa17d2a563f0b037. 
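
The repeated "dial tcp 139.178.70.105:6443: connect: connection refused" reflector errors above are expected while the static kube-apiserver pod is still being created by the sandboxes started here. A minimal sketch for polling that same endpoint until it starts answering is shown below; the /healthz path and the unverified TLS context are illustrative assumptions, not taken from this host.

    # Minimal sketch: poll the apiserver address the kubelet is failing to reach
    # above until it answers. Path (/healthz) and skip-verify TLS are assumptions;
    # in practice verify against the cluster CA.
    import ssl
    import time
    import urllib.request
    import urllib.error

    APISERVER = "https://139.178.70.105:6443/healthz"  # address from the log lines above

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # illustration only

    while True:
        try:
            with urllib.request.urlopen(APISERVER, context=ctx, timeout=2) as resp:
                print("apiserver answered:", resp.status)
                break
        except (urllib.error.URLError, OSError) as err:
            print("still unreachable:", err)
            time.sleep(2)
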
Jun 25 16:33:43.937000 audit: BPF prog-id=52 op=LOAD Jun 25 16:33:43.938000 audit: BPF prog-id=53 op=LOAD Jun 25 16:33:43.938000 audit[2150]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2124 pid=2150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:43.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136316136393931383730353464313737376561353530303836613865 Jun 25 16:33:43.939000 audit: BPF prog-id=54 op=LOAD Jun 25 16:33:43.939000 audit[2150]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2124 pid=2150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:43.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136316136393931383730353464313737376561353530303836613865 Jun 25 16:33:43.939000 audit: BPF prog-id=54 op=UNLOAD Jun 25 16:33:43.939000 audit: BPF prog-id=53 op=UNLOAD Jun 25 16:33:43.939000 audit: BPF prog-id=55 op=LOAD Jun 25 16:33:43.939000 audit[2150]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2124 pid=2150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:43.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136316136393931383730353464313737376561353530303836613865 Jun 25 16:33:43.943863 systemd[1]: Started cri-containerd-92f4920479422593ab8d52804dc3e66ac4e484e1424b79f23d49e0fb39513f42.scope - libcontainer container 92f4920479422593ab8d52804dc3e66ac4e484e1424b79f23d49e0fb39513f42. Jun 25 16:33:43.946065 systemd[1]: Started cri-containerd-bd81c4e32fdd6189abaa37d9f3ddadfdfdfef11a1696ca84cb474b2e4c49706b.scope - libcontainer container bd81c4e32fdd6189abaa37d9f3ddadfdfdfef11a1696ca84cb474b2e4c49706b. 
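
The audit PROCTITLE fields in the records above and below are hex-encoded command lines with NUL-separated arguments. A minimal sketch for decoding one of them (a truncated prefix of the runc invocation logged above) follows.

    # Minimal sketch: decode an audit PROCTITLE hex string back into argv.
    # The value below is a truncated prefix of one PROCTITLE record above.
    proctitle = (
        "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
        "002D2D6C6F67"
    )
    argv = bytes.fromhex(proctitle).split(b"\x00")
    print(" ".join(a.decode() for a in argv))
    # -> runc --root /run/containerd/runc/k8s.io --log
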
Jun 25 16:33:43.953000 audit: BPF prog-id=56 op=LOAD Jun 25 16:33:43.953000 audit: BPF prog-id=57 op=LOAD Jun 25 16:33:43.953000 audit[2153]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2122 pid=2153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:43.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932663439323034373934323235393361623864353238303464633365 Jun 25 16:33:43.953000 audit: BPF prog-id=58 op=LOAD Jun 25 16:33:43.953000 audit[2153]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2122 pid=2153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:43.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932663439323034373934323235393361623864353238303464633365 Jun 25 16:33:43.953000 audit: BPF prog-id=58 op=UNLOAD Jun 25 16:33:43.953000 audit: BPF prog-id=57 op=UNLOAD Jun 25 16:33:43.953000 audit: BPF prog-id=59 op=LOAD Jun 25 16:33:43.953000 audit[2153]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2122 pid=2153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:43.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932663439323034373934323235393361623864353238303464633365 Jun 25 16:33:43.958000 audit: BPF prog-id=60 op=LOAD Jun 25 16:33:43.959000 audit: BPF prog-id=61 op=LOAD Jun 25 16:33:43.959000 audit[2154]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2134 pid=2154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:43.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264383163346533326664643631383961626161333764396633646461 Jun 25 16:33:43.959000 audit: BPF prog-id=62 op=LOAD Jun 25 16:33:43.959000 audit[2154]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2134 pid=2154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:43.959000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264383163346533326664643631383961626161333764396633646461 Jun 25 16:33:43.959000 audit: BPF prog-id=62 op=UNLOAD Jun 25 16:33:43.959000 audit: BPF prog-id=61 op=UNLOAD Jun 25 16:33:43.959000 audit: BPF prog-id=63 op=LOAD Jun 25 16:33:43.959000 audit[2154]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2134 pid=2154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:43.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264383163346533326664643631383961626161333764396633646461 Jun 25 16:33:43.992287 containerd[1344]: time="2024-06-25T16:33:43.992261294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:72ebbc77bff49ce7ef33389837179585,Namespace:kube-system,Attempt:0,} returns sandbox id \"a61a699187054d1777ea550086a8ee2f8eec513097784513fa17d2a563f0b037\"" Jun 25 16:33:43.993136 containerd[1344]: time="2024-06-25T16:33:43.992929734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"92f4920479422593ab8d52804dc3e66ac4e484e1424b79f23d49e0fb39513f42\"" Jun 25 16:33:43.995510 containerd[1344]: time="2024-06-25T16:33:43.995491315Z" level=info msg="CreateContainer within sandbox \"a61a699187054d1777ea550086a8ee2f8eec513097784513fa17d2a563f0b037\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 16:33:43.995631 containerd[1344]: time="2024-06-25T16:33:43.995615180Z" level=info msg="CreateContainer within sandbox \"92f4920479422593ab8d52804dc3e66ac4e484e1424b79f23d49e0fb39513f42\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 16:33:44.002188 containerd[1344]: time="2024-06-25T16:33:44.002157464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd81c4e32fdd6189abaa37d9f3ddadfdfdfef11a1696ca84cb474b2e4c49706b\"" Jun 25 16:33:44.006977 containerd[1344]: time="2024-06-25T16:33:44.006946745Z" level=info msg="CreateContainer within sandbox \"bd81c4e32fdd6189abaa37d9f3ddadfdfdfef11a1696ca84cb474b2e4c49706b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 16:33:44.007565 containerd[1344]: time="2024-06-25T16:33:44.007541882Z" level=info msg="CreateContainer within sandbox \"a61a699187054d1777ea550086a8ee2f8eec513097784513fa17d2a563f0b037\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8b653b7d58540475f3af9108daeb5bb1917e7a2d33d59ef9a80ffc5b7ad99bc7\"" Jun 25 16:33:44.010280 containerd[1344]: time="2024-06-25T16:33:44.010059824Z" level=info msg="StartContainer for \"8b653b7d58540475f3af9108daeb5bb1917e7a2d33d59ef9a80ffc5b7ad99bc7\"" Jun 25 16:33:44.010704 containerd[1344]: time="2024-06-25T16:33:44.010690368Z" level=info msg="CreateContainer within sandbox \"92f4920479422593ab8d52804dc3e66ac4e484e1424b79f23d49e0fb39513f42\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"42c66983982cd6ab19750b13c9eb34e792031526428425a749dadcfc91ec1309\"" Jun 25 16:33:44.011580 containerd[1344]: time="2024-06-25T16:33:44.011559489Z" level=info msg="StartContainer for \"42c66983982cd6ab19750b13c9eb34e792031526428425a749dadcfc91ec1309\"" Jun 25 16:33:44.013220 containerd[1344]: time="2024-06-25T16:33:44.013198989Z" level=info msg="CreateContainer within sandbox \"bd81c4e32fdd6189abaa37d9f3ddadfdfdfef11a1696ca84cb474b2e4c49706b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3728828cc7da4948189d42af40215c91a8dbcf3780584248e288dc98da03d347\"" Jun 25 16:33:44.013505 containerd[1344]: time="2024-06-25T16:33:44.013492997Z" level=info msg="StartContainer for \"3728828cc7da4948189d42af40215c91a8dbcf3780584248e288dc98da03d347\"" Jun 25 16:33:44.032883 systemd[1]: Started cri-containerd-8b653b7d58540475f3af9108daeb5bb1917e7a2d33d59ef9a80ffc5b7ad99bc7.scope - libcontainer container 8b653b7d58540475f3af9108daeb5bb1917e7a2d33d59ef9a80ffc5b7ad99bc7. Jun 25 16:33:44.034669 systemd[1]: Started cri-containerd-3728828cc7da4948189d42af40215c91a8dbcf3780584248e288dc98da03d347.scope - libcontainer container 3728828cc7da4948189d42af40215c91a8dbcf3780584248e288dc98da03d347. Jun 25 16:33:44.042000 audit: BPF prog-id=64 op=LOAD Jun 25 16:33:44.043000 audit: BPF prog-id=65 op=LOAD Jun 25 16:33:44.043000 audit[2250]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=2124 pid=2250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:44.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862363533623764353835343034373566336166393130386461656235 Jun 25 16:33:44.043000 audit: BPF prog-id=66 op=LOAD Jun 25 16:33:44.043000 audit[2250]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=2124 pid=2250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:44.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862363533623764353835343034373566336166393130386461656235 Jun 25 16:33:44.043000 audit: BPF prog-id=66 op=UNLOAD Jun 25 16:33:44.043000 audit: BPF prog-id=65 op=UNLOAD Jun 25 16:33:44.043000 audit: BPF prog-id=67 op=LOAD Jun 25 16:33:44.043000 audit[2250]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=2124 pid=2250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:44.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862363533623764353835343034373566336166393130386461656235 Jun 25 16:33:44.049000 audit: BPF 
prog-id=68 op=LOAD Jun 25 16:33:44.048837 systemd[1]: Started cri-containerd-42c66983982cd6ab19750b13c9eb34e792031526428425a749dadcfc91ec1309.scope - libcontainer container 42c66983982cd6ab19750b13c9eb34e792031526428425a749dadcfc91ec1309. Jun 25 16:33:44.050000 audit: BPF prog-id=69 op=LOAD Jun 25 16:33:44.050000 audit[2251]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2134 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:44.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337323838323863633764613439343831383964343261663430323135 Jun 25 16:33:44.050000 audit: BPF prog-id=70 op=LOAD Jun 25 16:33:44.050000 audit[2251]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2134 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:44.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337323838323863633764613439343831383964343261663430323135 Jun 25 16:33:44.050000 audit: BPF prog-id=70 op=UNLOAD Jun 25 16:33:44.050000 audit: BPF prog-id=69 op=UNLOAD Jun 25 16:33:44.050000 audit: BPF prog-id=71 op=LOAD Jun 25 16:33:44.050000 audit[2251]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2134 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:44.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337323838323863633764613439343831383964343261663430323135 Jun 25 16:33:44.060000 audit: BPF prog-id=72 op=LOAD Jun 25 16:33:44.060000 audit: BPF prog-id=73 op=LOAD Jun 25 16:33:44.060000 audit[2270]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=2122 pid=2270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:44.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432633636393833393832636436616231393735306231336339656233 Jun 25 16:33:44.060000 audit: BPF prog-id=74 op=LOAD Jun 25 16:33:44.060000 audit[2270]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=2122 pid=2270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:44.060000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432633636393833393832636436616231393735306231336339656233 Jun 25 16:33:44.060000 audit: BPF prog-id=74 op=UNLOAD Jun 25 16:33:44.060000 audit: BPF prog-id=73 op=UNLOAD Jun 25 16:33:44.060000 audit: BPF prog-id=75 op=LOAD Jun 25 16:33:44.060000 audit[2270]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=2122 pid=2270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:44.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432633636393833393832636436616231393735306231336339656233 Jun 25 16:33:44.077514 containerd[1344]: time="2024-06-25T16:33:44.077489008Z" level=info msg="StartContainer for \"3728828cc7da4948189d42af40215c91a8dbcf3780584248e288dc98da03d347\" returns successfully" Jun 25 16:33:44.080815 containerd[1344]: time="2024-06-25T16:33:44.080794264Z" level=info msg="StartContainer for \"8b653b7d58540475f3af9108daeb5bb1917e7a2d33d59ef9a80ffc5b7ad99bc7\" returns successfully" Jun 25 16:33:44.087136 kubelet[2064]: E0625 16:33:44.087115 2064 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="1.6s" Jun 25 16:33:44.093438 containerd[1344]: time="2024-06-25T16:33:44.093407442Z" level=info msg="StartContainer for \"42c66983982cd6ab19750b13c9eb34e792031526428425a749dadcfc91ec1309\" returns successfully" Jun 25 16:33:44.190112 kubelet[2064]: I0625 16:33:44.189914 2064 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:33:44.190112 kubelet[2064]: E0625 16:33:44.190103 2064 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jun 25 16:33:44.255791 kubelet[2064]: W0625 16:33:44.255711 2064 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:33:44.255791 kubelet[2064]: E0625 16:33:44.255767 2064 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:33:44.603000 audit[2295]: AVC avc: denied { watch } for pid=2295 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=1041984 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:44.603000 audit[2295]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c000bdc000 a2=fc6 a3=0 items=0 ppid=2134 pid=2295 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c480,c1023 key=(null) Jun 25 16:33:44.603000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:33:44.604000 audit[2295]: AVC avc: denied { watch } for pid=2295 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:44.604000 audit[2295]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c0002131c0 a2=fc6 a3=0 items=0 ppid=2134 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c480,c1023 key=(null) Jun 25 16:33:44.604000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:33:44.649801 kubelet[2064]: E0625 16:33:44.649722 2064 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:33:45.560000 audit[2286]: AVC avc: denied { watch } for pid=2286 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=1041984 scontext=system_u:system_r:container_t:s0:c449,c892 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:45.560000 audit[2286]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c0060e5410 a2=fc6 a3=0 items=0 ppid=2124 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c449,c892 key=(null) Jun 25 16:33:45.560000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313035002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:33:45.561000 audit[2286]: AVC avc: denied { watch } for pid=2286 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=1041980 scontext=system_u:system_r:container_t:s0:c449,c892 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:45.561000 audit[2286]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c0060e5470 a2=fc6 a3=0 items=0 ppid=2124 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c449,c892 key=(null) Jun 25 16:33:45.561000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313035002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:33:45.569000 audit[2286]: AVC avc: denied { watch } for pid=2286 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=1041986 scontext=system_u:system_r:container_t:s0:c449,c892 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:45.569000 audit[2286]: AVC avc: denied { watch } for pid=2286 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c449,c892 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:45.569000 audit[2286]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=45 a1=c0069ff660 a2=fc6 a3=0 items=0 ppid=2124 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c449,c892 key=(null) Jun 25 16:33:45.569000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313035002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:33:45.569000 audit[2286]: AVC avc: denied { watch } for pid=2286 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=1041984 scontext=system_u:system_r:container_t:s0:c449,c892 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:45.569000 audit[2286]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=45 a1=c004d8fb90 a2=fc6 a3=0 items=0 ppid=2124 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c449,c892 key=(null) Jun 25 16:33:45.569000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313035002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:33:45.569000 audit[2286]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=44 a1=c0060e5cb0 a2=fc6 a3=0 items=0 ppid=2124 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c449,c892 key=(null) Jun 25 16:33:45.569000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313035002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:33:45.569000 audit[2286]: AVC avc: denied { watch } for pid=2286 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c449,c892 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:45.569000 audit[2286]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=44 a1=c005e0e0c0 a2=fc6 a3=0 items=0 ppid=2124 pid=2286 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c449,c892 key=(null) Jun 25 16:33:45.569000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313035002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:33:45.695053 kubelet[2064]: E0625 16:33:45.695025 2064 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 25 16:33:45.791040 kubelet[2064]: I0625 16:33:45.791021 2064 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:33:45.799953 kubelet[2064]: I0625 16:33:45.799926 2064 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jun 25 16:33:45.805480 kubelet[2064]: E0625 16:33:45.805457 2064 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:33:45.905753 kubelet[2064]: E0625 16:33:45.905673 2064 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:33:46.006139 kubelet[2064]: E0625 16:33:46.006108 2064 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:33:46.106321 kubelet[2064]: E0625 16:33:46.106298 2064 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:33:46.206842 kubelet[2064]: E0625 16:33:46.206758 2064 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:33:46.307256 kubelet[2064]: E0625 16:33:46.307215 2064 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:33:46.407847 kubelet[2064]: E0625 16:33:46.407824 2064 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:33:46.508512 kubelet[2064]: E0625 16:33:46.508444 2064 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:33:46.609117 kubelet[2064]: E0625 16:33:46.609101 2064 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:33:46.709998 kubelet[2064]: E0625 16:33:46.709974 2064 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:33:46.810334 kubelet[2064]: E0625 16:33:46.810315 2064 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:33:46.910931 kubelet[2064]: E0625 16:33:46.910913 2064 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:33:47.678281 kubelet[2064]: I0625 16:33:47.678260 2064 apiserver.go:52] "Watching apiserver" Jun 25 16:33:47.685789 kubelet[2064]: I0625 16:33:47.685776 2064 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:33:47.951661 systemd[1]: Reloading. 
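
The kubelet_node_status messages above show node "localhost" first failing to register against the unreachable apiserver and later registering successfully. A minimal sketch for confirming the node through the API is shown below; it assumes the kubernetes Python client is installed and an admin kubeconfig exists at /etc/kubernetes/admin.conf, both of which are assumptions, not taken from this log.

    # Minimal sketch: read back the node the kubelet reports as registered above.
    from kubernetes import client, config

    config.load_kube_config(config_file="/etc/kubernetes/admin.conf")  # assumed path
    v1 = client.CoreV1Api()
    node = v1.read_node("localhost")
    print(node.metadata.name, [c.type + "=" + c.status for c in node.status.conditions])
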
Jun 25 16:33:48.060709 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 25 16:33:48.073804 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:33:48.124068 kernel: kauditd_printk_skb: 158 callbacks suppressed Jun 25 16:33:48.124132 kernel: audit: type=1334 audit(1719333228.119:329): prog-id=76 op=LOAD Jun 25 16:33:48.124148 kernel: audit: type=1334 audit(1719333228.119:330): prog-id=38 op=UNLOAD Jun 25 16:33:48.124161 kernel: audit: type=1334 audit(1719333228.119:331): prog-id=77 op=LOAD Jun 25 16:33:48.124177 kernel: audit: type=1334 audit(1719333228.119:332): prog-id=78 op=LOAD Jun 25 16:33:48.124189 kernel: audit: type=1334 audit(1719333228.119:333): prog-id=39 op=UNLOAD Jun 25 16:33:48.124200 kernel: audit: type=1334 audit(1719333228.119:334): prog-id=40 op=UNLOAD Jun 25 16:33:48.124212 kernel: audit: type=1334 audit(1719333228.119:335): prog-id=79 op=LOAD Jun 25 16:33:48.119000 audit: BPF prog-id=76 op=LOAD Jun 25 16:33:48.119000 audit: BPF prog-id=38 op=UNLOAD Jun 25 16:33:48.119000 audit: BPF prog-id=77 op=LOAD Jun 25 16:33:48.119000 audit: BPF prog-id=78 op=LOAD Jun 25 16:33:48.119000 audit: BPF prog-id=39 op=UNLOAD Jun 25 16:33:48.119000 audit: BPF prog-id=40 op=UNLOAD Jun 25 16:33:48.119000 audit: BPF prog-id=79 op=LOAD Jun 25 16:33:48.125956 kernel: audit: type=1334 audit(1719333228.119:336): prog-id=56 op=UNLOAD Jun 25 16:33:48.125980 kernel: audit: type=1334 audit(1719333228.120:337): prog-id=80 op=LOAD Jun 25 16:33:48.125993 kernel: audit: type=1334 audit(1719333228.120:338): prog-id=72 op=UNLOAD Jun 25 16:33:48.119000 audit: BPF prog-id=56 op=UNLOAD Jun 25 16:33:48.120000 audit: BPF prog-id=80 op=LOAD Jun 25 16:33:48.120000 audit: BPF prog-id=72 op=UNLOAD Jun 25 16:33:48.120000 audit: BPF prog-id=81 op=LOAD Jun 25 16:33:48.120000 audit: BPF prog-id=52 op=UNLOAD Jun 25 16:33:48.121000 audit: BPF prog-id=82 op=LOAD Jun 25 16:33:48.121000 audit: BPF prog-id=41 op=UNLOAD Jun 25 16:33:48.121000 audit: BPF prog-id=83 op=LOAD Jun 25 16:33:48.121000 audit: BPF prog-id=84 op=LOAD Jun 25 16:33:48.121000 audit: BPF prog-id=42 op=UNLOAD Jun 25 16:33:48.121000 audit: BPF prog-id=43 op=UNLOAD Jun 25 16:33:48.122000 audit: BPF prog-id=85 op=LOAD Jun 25 16:33:48.122000 audit: BPF prog-id=64 op=UNLOAD Jun 25 16:33:48.122000 audit: BPF prog-id=86 op=LOAD Jun 25 16:33:48.122000 audit: BPF prog-id=44 op=UNLOAD Jun 25 16:33:48.123000 audit: BPF prog-id=87 op=LOAD Jun 25 16:33:48.123000 audit: BPF prog-id=45 op=UNLOAD Jun 25 16:33:48.123000 audit: BPF prog-id=88 op=LOAD Jun 25 16:33:48.123000 audit: BPF prog-id=89 op=LOAD Jun 25 16:33:48.123000 audit: BPF prog-id=46 op=UNLOAD Jun 25 16:33:48.123000 audit: BPF prog-id=47 op=UNLOAD Jun 25 16:33:48.124000 audit: BPF prog-id=90 op=LOAD Jun 25 16:33:48.124000 audit: BPF prog-id=48 op=UNLOAD Jun 25 16:33:48.124000 audit: BPF prog-id=91 op=LOAD Jun 25 16:33:48.124000 audit: BPF prog-id=68 op=UNLOAD Jun 25 16:33:48.125000 audit: BPF prog-id=92 op=LOAD Jun 25 16:33:48.125000 audit: BPF prog-id=60 op=UNLOAD Jun 25 16:33:48.125000 audit: BPF prog-id=93 op=LOAD Jun 25 16:33:48.125000 audit: BPF prog-id=49 op=UNLOAD Jun 25 16:33:48.125000 audit: BPF prog-id=94 op=LOAD Jun 25 16:33:48.125000 audit: BPF prog-id=95 op=LOAD Jun 25 16:33:48.125000 
audit: BPF prog-id=50 op=UNLOAD Jun 25 16:33:48.125000 audit: BPF prog-id=51 op=UNLOAD Jun 25 16:33:48.134126 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:33:48.154937 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:33:48.155079 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:33:48.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:48.163038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:33:48.326611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:33:48.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:48.411825 kubelet[2425]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:33:48.412057 kubelet[2425]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:33:48.412094 kubelet[2425]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:33:48.412179 kubelet[2425]: I0625 16:33:48.412157 2425 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:33:48.416623 kubelet[2425]: I0625 16:33:48.415996 2425 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 16:33:48.416623 kubelet[2425]: I0625 16:33:48.416012 2425 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:33:48.416623 kubelet[2425]: I0625 16:33:48.416161 2425 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 16:33:48.417480 kubelet[2425]: I0625 16:33:48.417143 2425 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 16:33:48.418242 kubelet[2425]: I0625 16:33:48.417870 2425 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:33:48.432463 kubelet[2425]: I0625 16:33:48.431925 2425 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:33:48.432463 kubelet[2425]: I0625 16:33:48.432060 2425 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:33:48.432463 kubelet[2425]: I0625 16:33:48.432174 2425 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:33:48.432463 kubelet[2425]: I0625 16:33:48.432188 2425 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:33:48.432463 kubelet[2425]: I0625 16:33:48.432195 2425 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:33:48.432463 kubelet[2425]: I0625 16:33:48.432220 2425 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:33:48.432683 kubelet[2425]: I0625 16:33:48.432276 2425 kubelet.go:393] "Attempting to sync node with API server" Jun 25 16:33:48.432683 kubelet[2425]: I0625 16:33:48.432285 2425 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:33:48.432683 kubelet[2425]: I0625 16:33:48.432298 2425 kubelet.go:309] "Adding apiserver pod source" Jun 25 16:33:48.432683 kubelet[2425]: I0625 16:33:48.432308 2425 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:33:48.435865 kubelet[2425]: I0625 16:33:48.435847 2425 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:33:48.436715 kubelet[2425]: I0625 16:33:48.436175 2425 server.go:1232] "Started kubelet" Jun 25 16:33:48.441344 kubelet[2425]: I0625 16:33:48.441324 2425 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:33:48.441946 kubelet[2425]: E0625 16:33:48.441931 2425 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 16:33:48.441998 kubelet[2425]: E0625 16:33:48.441949 2425 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:33:48.457440 kubelet[2425]: I0625 16:33:48.457414 2425 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:33:48.458068 kubelet[2425]: I0625 16:33:48.458021 2425 server.go:462] "Adding debug handlers to kubelet server" Jun 25 16:33:48.458796 kubelet[2425]: I0625 16:33:48.458783 2425 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:33:48.460057 kubelet[2425]: I0625 16:33:48.459033 2425 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:33:48.460057 kubelet[2425]: I0625 16:33:48.459115 2425 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:33:48.462703 kubelet[2425]: I0625 16:33:48.462687 2425 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 16:33:48.462911 kubelet[2425]: I0625 16:33:48.462904 2425 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:33:48.477611 kubelet[2425]: I0625 16:33:48.477584 2425 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:33:48.478401 kubelet[2425]: I0625 16:33:48.478388 2425 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 16:33:48.478468 kubelet[2425]: I0625 16:33:48.478462 2425 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:33:48.478524 kubelet[2425]: I0625 16:33:48.478519 2425 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 16:33:48.478611 kubelet[2425]: E0625 16:33:48.478605 2425 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:33:48.540319 kubelet[2425]: I0625 16:33:48.540272 2425 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:33:48.541621 kubelet[2425]: I0625 16:33:48.541613 2425 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:33:48.541702 kubelet[2425]: I0625 16:33:48.541696 2425 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:33:48.541853 kubelet[2425]: I0625 16:33:48.541846 2425 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 16:33:48.541909 kubelet[2425]: I0625 16:33:48.541903 2425 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 16:33:48.541949 kubelet[2425]: I0625 16:33:48.541944 2425 policy_none.go:49] "None policy: Start" Jun 25 16:33:48.542516 kubelet[2425]: I0625 16:33:48.542508 2425 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 16:33:48.542579 kubelet[2425]: I0625 16:33:48.542573 2425 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:33:48.543111 kubelet[2425]: I0625 16:33:48.542679 2425 state_mem.go:75] "Updated machine memory state" Jun 25 16:33:48.551791 kubelet[2425]: I0625 16:33:48.551774 2425 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:33:48.552197 kubelet[2425]: I0625 16:33:48.552180 2425 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:33:48.560353 kubelet[2425]: I0625 16:33:48.560339 2425 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:33:48.569736 kubelet[2425]: I0625 16:33:48.569693 2425 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Jun 25 16:33:48.569827 kubelet[2425]: I0625 
16:33:48.569748 2425 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jun 25 16:33:48.580686 kubelet[2425]: I0625 16:33:48.579291 2425 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 16:33:48.580686 kubelet[2425]: I0625 16:33:48.579348 2425 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 16:33:48.580686 kubelet[2425]: I0625 16:33:48.579371 2425 topology_manager.go:215] "Topology Admit Handler" podUID="72ebbc77bff49ce7ef33389837179585" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 16:33:48.760201 kubelet[2425]: I0625 16:33:48.760183 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/72ebbc77bff49ce7ef33389837179585-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"72ebbc77bff49ce7ef33389837179585\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:33:48.760327 kubelet[2425]: I0625 16:33:48.760320 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:33:48.760381 kubelet[2425]: I0625 16:33:48.760376 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:33:48.760429 kubelet[2425]: I0625 16:33:48.760424 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/72ebbc77bff49ce7ef33389837179585-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"72ebbc77bff49ce7ef33389837179585\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:33:48.760496 kubelet[2425]: I0625 16:33:48.760490 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/72ebbc77bff49ce7ef33389837179585-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"72ebbc77bff49ce7ef33389837179585\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:33:48.760541 kubelet[2425]: I0625 16:33:48.760536 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:33:48.760587 kubelet[2425]: I0625 16:33:48.760582 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 
16:33:48.760631 kubelet[2425]: I0625 16:33:48.760627 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:33:48.760680 kubelet[2425]: I0625 16:33:48.760675 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jun 25 16:33:49.435704 kubelet[2425]: I0625 16:33:49.435684 2425 apiserver.go:52] "Watching apiserver" Jun 25 16:33:49.459410 kubelet[2425]: I0625 16:33:49.459392 2425 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:33:49.565157 kubelet[2425]: E0625 16:33:49.565137 2425 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 25 16:33:49.568902 kubelet[2425]: I0625 16:33:49.568887 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.567351071 podCreationTimestamp="2024-06-25 16:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:33:49.566262159 +0000 UTC m=+1.234690196" watchObservedRunningTime="2024-06-25 16:33:49.567351071 +0000 UTC m=+1.235779109" Jun 25 16:33:49.634952 kubelet[2425]: I0625 16:33:49.634935 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.634911982 podCreationTimestamp="2024-06-25 16:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:33:49.59892275 +0000 UTC m=+1.267350790" watchObservedRunningTime="2024-06-25 16:33:49.634911982 +0000 UTC m=+1.303340018" Jun 25 16:33:52.664760 sudo[1588]: pam_unix(sudo:session): session closed for user root Jun 25 16:33:52.664000 audit[1588]: USER_END pid=1588 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:33:52.664000 audit[1588]: CRED_DISP pid=1588 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:33:52.668057 sshd[1585]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:52.668000 audit[1585]: USER_END pid=1585 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:33:52.668000 audit[1585]: CRED_DISP pid=1585 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:33:52.670122 systemd-logind[1326]: Session 9 logged out. Waiting for processes to exit. Jun 25 16:33:52.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.105:22-139.178.68.195:33840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:52.670987 systemd[1]: sshd@6-139.178.70.105:22-139.178.68.195:33840.service: Deactivated successfully. Jun 25 16:33:52.671473 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 16:33:52.671582 systemd[1]: session-9.scope: Consumed 3.124s CPU time. Jun 25 16:33:52.671902 systemd-logind[1326]: Removed session 9. Jun 25 16:33:57.408919 kubelet[2425]: I0625 16:33:57.408899 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=9.40887934 podCreationTimestamp="2024-06-25 16:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:33:49.635459392 +0000 UTC m=+1.303887431" watchObservedRunningTime="2024-06-25 16:33:57.40887934 +0000 UTC m=+9.077307379" Jun 25 16:34:00.731296 kernel: kauditd_printk_skb: 37 callbacks suppressed Jun 25 16:34:00.731380 kernel: audit: type=1400 audit(1719333240.728:376): avc: denied { watch } for pid=2295 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:00.728000 audit[2295]: AVC avc: denied { watch } for pid=2295 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:00.728000 audit[2295]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c00082e8c0 a2=fc6 a3=0 items=0 ppid=2134 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c480,c1023 key=(null) Jun 25 16:34:00.734250 kernel: audit: type=1300 audit(1719333240.728:376): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c00082e8c0 a2=fc6 a3=0 items=0 ppid=2134 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c480,c1023 key=(null) Jun 25 16:34:00.734289 kernel: audit: type=1327 audit(1719333240.728:376): 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:00.728000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:00.728000 audit[2295]: AVC avc: denied { watch } for pid=2295 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:00.738018 kernel: audit: type=1400 audit(1719333240.728:377): avc: denied { watch } for pid=2295 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:00.738056 kernel: audit: type=1300 audit(1719333240.728:377): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c00082ea80 a2=fc6 a3=0 items=0 ppid=2134 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c480,c1023 key=(null) Jun 25 16:34:00.728000 audit[2295]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c00082ea80 a2=fc6 a3=0 items=0 ppid=2134 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c480,c1023 key=(null) Jun 25 16:34:00.728000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:00.742232 kernel: audit: type=1327 audit(1719333240.728:377): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:00.742267 kernel: audit: type=1400 audit(1719333240.729:378): avc: denied { watch } for pid=2295 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:00.729000 audit[2295]: AVC avc: denied { watch } for pid=2295 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:00.729000 audit[2295]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c00082ec40 a2=fc6 a3=0 items=0 ppid=2134 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c480,c1023 key=(null) Jun 25 
16:34:00.746246 kernel: audit: type=1300 audit(1719333240.729:378): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c00082ec40 a2=fc6 a3=0 items=0 ppid=2134 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c480,c1023 key=(null) Jun 25 16:34:00.746279 kernel: audit: type=1327 audit(1719333240.729:378): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:00.729000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:00.729000 audit[2295]: AVC avc: denied { watch } for pid=2295 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:00.749949 kernel: audit: type=1400 audit(1719333240.729:379): avc: denied { watch } for pid=2295 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:00.729000 audit[2295]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c00082ee00 a2=fc6 a3=0 items=0 ppid=2134 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c480,c1023 key=(null) Jun 25 16:34:00.729000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:00.737000 audit[2295]: AVC avc: denied { watch } for pid=2295 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="sda9" ino=521018 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 16:34:00.737000 audit[2295]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0009c96c0 a2=fc6 a3=0 items=0 ppid=2134 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c480,c1023 key=(null) Jun 25 16:34:00.737000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:01.377909 kubelet[2425]: I0625 16:34:01.377895 2425 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 16:34:01.378363 containerd[1344]: time="2024-06-25T16:34:01.378336428Z" level=info msg="No cni 
config template is specified, wait for other system components to drop the config." Jun 25 16:34:01.378492 kubelet[2425]: I0625 16:34:01.378466 2425 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 16:34:01.717123 kubelet[2425]: I0625 16:34:01.717058 2425 topology_manager.go:215] "Topology Admit Handler" podUID="eb1213a7-f8a0-4bfb-b0e9-5f98424f9f9d" podNamespace="kube-system" podName="kube-proxy-phzxq" Jun 25 16:34:01.721498 systemd[1]: Created slice kubepods-besteffort-podeb1213a7_f8a0_4bfb_b0e9_5f98424f9f9d.slice - libcontainer container kubepods-besteffort-podeb1213a7_f8a0_4bfb_b0e9_5f98424f9f9d.slice. Jun 25 16:34:01.732799 kubelet[2425]: I0625 16:34:01.732779 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eb1213a7-f8a0-4bfb-b0e9-5f98424f9f9d-kube-proxy\") pod \"kube-proxy-phzxq\" (UID: \"eb1213a7-f8a0-4bfb-b0e9-5f98424f9f9d\") " pod="kube-system/kube-proxy-phzxq" Jun 25 16:34:01.732888 kubelet[2425]: I0625 16:34:01.732825 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb1213a7-f8a0-4bfb-b0e9-5f98424f9f9d-lib-modules\") pod \"kube-proxy-phzxq\" (UID: \"eb1213a7-f8a0-4bfb-b0e9-5f98424f9f9d\") " pod="kube-system/kube-proxy-phzxq" Jun 25 16:34:01.732888 kubelet[2425]: I0625 16:34:01.732841 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb1213a7-f8a0-4bfb-b0e9-5f98424f9f9d-xtables-lock\") pod \"kube-proxy-phzxq\" (UID: \"eb1213a7-f8a0-4bfb-b0e9-5f98424f9f9d\") " pod="kube-system/kube-proxy-phzxq" Jun 25 16:34:01.732888 kubelet[2425]: I0625 16:34:01.732855 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2q78\" (UniqueName: \"kubernetes.io/projected/eb1213a7-f8a0-4bfb-b0e9-5f98424f9f9d-kube-api-access-s2q78\") pod \"kube-proxy-phzxq\" (UID: \"eb1213a7-f8a0-4bfb-b0e9-5f98424f9f9d\") " pod="kube-system/kube-proxy-phzxq" Jun 25 16:34:01.837398 kubelet[2425]: E0625 16:34:01.837378 2425 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 25 16:34:01.837500 kubelet[2425]: E0625 16:34:01.837494 2425 projected.go:198] Error preparing data for projected volume kube-api-access-s2q78 for pod kube-system/kube-proxy-phzxq: configmap "kube-root-ca.crt" not found Jun 25 16:34:01.838022 kubelet[2425]: E0625 16:34:01.838013 2425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb1213a7-f8a0-4bfb-b0e9-5f98424f9f9d-kube-api-access-s2q78 podName:eb1213a7-f8a0-4bfb-b0e9-5f98424f9f9d nodeName:}" failed. No retries permitted until 2024-06-25 16:34:02.337552039 +0000 UTC m=+14.005980072 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2q78" (UniqueName: "kubernetes.io/projected/eb1213a7-f8a0-4bfb-b0e9-5f98424f9f9d-kube-api-access-s2q78") pod "kube-proxy-phzxq" (UID: "eb1213a7-f8a0-4bfb-b0e9-5f98424f9f9d") : configmap "kube-root-ca.crt" not found Jun 25 16:34:02.215253 kubelet[2425]: I0625 16:34:02.215234 2425 topology_manager.go:215] "Topology Admit Handler" podUID="f206ee85-045a-44bf-b961-efd67e7585d0" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-6gs5f" Jun 25 16:34:02.218518 systemd[1]: Created slice kubepods-besteffort-podf206ee85_045a_44bf_b961_efd67e7585d0.slice - libcontainer container kubepods-besteffort-podf206ee85_045a_44bf_b961_efd67e7585d0.slice. Jun 25 16:34:02.235128 kubelet[2425]: I0625 16:34:02.235099 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f206ee85-045a-44bf-b961-efd67e7585d0-var-lib-calico\") pod \"tigera-operator-76c4974c85-6gs5f\" (UID: \"f206ee85-045a-44bf-b961-efd67e7585d0\") " pod="tigera-operator/tigera-operator-76c4974c85-6gs5f" Jun 25 16:34:02.235220 kubelet[2425]: I0625 16:34:02.235136 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7m5v\" (UniqueName: \"kubernetes.io/projected/f206ee85-045a-44bf-b961-efd67e7585d0-kube-api-access-h7m5v\") pod \"tigera-operator-76c4974c85-6gs5f\" (UID: \"f206ee85-045a-44bf-b961-efd67e7585d0\") " pod="tigera-operator/tigera-operator-76c4974c85-6gs5f" Jun 25 16:34:02.521322 containerd[1344]: time="2024-06-25T16:34:02.521242772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-6gs5f,Uid:f206ee85-045a-44bf-b961-efd67e7585d0,Namespace:tigera-operator,Attempt:0,}" Jun 25 16:34:02.534248 containerd[1344]: time="2024-06-25T16:34:02.534197483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:34:02.534352 containerd[1344]: time="2024-06-25T16:34:02.534242184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:02.534352 containerd[1344]: time="2024-06-25T16:34:02.534257225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:34:02.534352 containerd[1344]: time="2024-06-25T16:34:02.534265939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:02.547824 systemd[1]: Started cri-containerd-1fed61b3b7573ed560e17f8351f6fc0fcc460768f66ad2ec286621ffe1b02964.scope - libcontainer container 1fed61b3b7573ed560e17f8351f6fc0fcc460768f66ad2ec286621ffe1b02964. 
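The PROCTITLE fields in the audit records above (and in the runc and iptables records that follow) are the audited process's argv, hex-encoded with NUL bytes separating the arguments; auditd truncates long command lines, so the decoded text can end mid-argument. Below is an illustrative Python 3 sketch, not part of the captured log, that decodes a prefix of the kube-controller-manager PROCTITLE shown above (variable names are made up for the example):

    # Decode an audit PROCTITLE value: hex -> bytes, then split on NUL separators.
    # This prefix is copied verbatim from the kube-controller-manager record above;
    # the full field in the log is truncated by auditd.
    hex_proctitle = (
        "6B7562652D636F6E74726F6C6C65722D6D616E61676572"
        "002D2D616C6C6F636174652D6E6F64652D63696472733D74727565"
    )
    argv = bytes.fromhex(hex_proctitle).split(b"\x00")
    print(" ".join(arg.decode("utf-8", "replace") for arg in argv))
    # prints: kube-controller-manager --allocate-node-cidrs=true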
Jun 25 16:34:02.553000 audit: BPF prog-id=96 op=LOAD Jun 25 16:34:02.553000 audit: BPF prog-id=97 op=LOAD Jun 25 16:34:02.553000 audit[2521]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2511 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:02.553000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166656436316233623735373365643536306531376638333531663666 Jun 25 16:34:02.553000 audit: BPF prog-id=98 op=LOAD Jun 25 16:34:02.553000 audit[2521]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2511 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:02.553000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166656436316233623735373365643536306531376638333531663666 Jun 25 16:34:02.553000 audit: BPF prog-id=98 op=UNLOAD Jun 25 16:34:02.553000 audit: BPF prog-id=97 op=UNLOAD Jun 25 16:34:02.553000 audit: BPF prog-id=99 op=LOAD Jun 25 16:34:02.553000 audit[2521]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2511 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:02.553000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166656436316233623735373365643536306531376638333531663666 Jun 25 16:34:02.574735 containerd[1344]: time="2024-06-25T16:34:02.574692851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-6gs5f,Uid:f206ee85-045a-44bf-b961-efd67e7585d0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1fed61b3b7573ed560e17f8351f6fc0fcc460768f66ad2ec286621ffe1b02964\"" Jun 25 16:34:02.576033 containerd[1344]: time="2024-06-25T16:34:02.576014918Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 16:34:02.629412 containerd[1344]: time="2024-06-25T16:34:02.629387624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-phzxq,Uid:eb1213a7-f8a0-4bfb-b0e9-5f98424f9f9d,Namespace:kube-system,Attempt:0,}" Jun 25 16:34:02.641690 containerd[1344]: time="2024-06-25T16:34:02.641608022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:34:02.641690 containerd[1344]: time="2024-06-25T16:34:02.641637193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:02.641690 containerd[1344]: time="2024-06-25T16:34:02.641649644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:34:02.642052 containerd[1344]: time="2024-06-25T16:34:02.641673868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:02.652827 systemd[1]: Started cri-containerd-6b94b7e93c06694873011e501297cb5339a51202272d1f583948c392994fb9a9.scope - libcontainer container 6b94b7e93c06694873011e501297cb5339a51202272d1f583948c392994fb9a9. Jun 25 16:34:02.657000 audit: BPF prog-id=100 op=LOAD Jun 25 16:34:02.657000 audit: BPF prog-id=101 op=LOAD Jun 25 16:34:02.657000 audit[2562]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2552 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:02.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662393462376539336330363639343837333031316535303132393763 Jun 25 16:34:02.657000 audit: BPF prog-id=102 op=LOAD Jun 25 16:34:02.657000 audit[2562]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2552 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:02.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662393462376539336330363639343837333031316535303132393763 Jun 25 16:34:02.658000 audit: BPF prog-id=102 op=UNLOAD Jun 25 16:34:02.658000 audit: BPF prog-id=101 op=UNLOAD Jun 25 16:34:02.658000 audit: BPF prog-id=103 op=LOAD Jun 25 16:34:02.658000 audit[2562]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2552 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:02.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662393462376539336330363639343837333031316535303132393763 Jun 25 16:34:02.666296 containerd[1344]: time="2024-06-25T16:34:02.666272738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-phzxq,Uid:eb1213a7-f8a0-4bfb-b0e9-5f98424f9f9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b94b7e93c06694873011e501297cb5339a51202272d1f583948c392994fb9a9\"" Jun 25 16:34:02.668258 containerd[1344]: time="2024-06-25T16:34:02.668167485Z" level=info msg="CreateContainer within sandbox \"6b94b7e93c06694873011e501297cb5339a51202272d1f583948c392994fb9a9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 16:34:02.676167 containerd[1344]: time="2024-06-25T16:34:02.676139154Z" level=info msg="CreateContainer within sandbox \"6b94b7e93c06694873011e501297cb5339a51202272d1f583948c392994fb9a9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"e50e8a0be56ac22746834d664998c894958be84e127b952873b42788d19755ea\"" Jun 25 16:34:02.677281 containerd[1344]: time="2024-06-25T16:34:02.677259848Z" level=info msg="StartContainer for \"e50e8a0be56ac22746834d664998c894958be84e127b952873b42788d19755ea\"" Jun 25 16:34:02.692823 systemd[1]: Started cri-containerd-e50e8a0be56ac22746834d664998c894958be84e127b952873b42788d19755ea.scope - libcontainer container e50e8a0be56ac22746834d664998c894958be84e127b952873b42788d19755ea. Jun 25 16:34:02.700000 audit: BPF prog-id=104 op=LOAD Jun 25 16:34:02.700000 audit[2593]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2552 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:02.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535306538613062653536616332323734363833346436363439393863 Jun 25 16:34:02.700000 audit: BPF prog-id=105 op=LOAD Jun 25 16:34:02.700000 audit[2593]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2552 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:02.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535306538613062653536616332323734363833346436363439393863 Jun 25 16:34:02.700000 audit: BPF prog-id=105 op=UNLOAD Jun 25 16:34:02.700000 audit: BPF prog-id=104 op=UNLOAD Jun 25 16:34:02.700000 audit: BPF prog-id=106 op=LOAD Jun 25 16:34:02.700000 audit[2593]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2552 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:02.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535306538613062653536616332323734363833346436363439393863 Jun 25 16:34:02.708742 containerd[1344]: time="2024-06-25T16:34:02.708710076Z" level=info msg="StartContainer for \"e50e8a0be56ac22746834d664998c894958be84e127b952873b42788d19755ea\" returns successfully" Jun 25 16:34:03.007000 audit[2644]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2644 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.007000 audit[2644]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc06d41bf0 a2=0 a3=7ffc06d41bdc items=0 ppid=2604 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.007000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:34:03.008000 
audit[2645]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2645 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.008000 audit[2645]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffab5fd140 a2=0 a3=7fffab5fd12c items=0 ppid=2604 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.008000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:34:03.010000 audit[2647]: NETFILTER_CFG table=mangle:40 family=10 entries=1 op=nft_register_chain pid=2647 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.010000 audit[2647]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff53bd19d0 a2=0 a3=7fff53bd19bc items=0 ppid=2604 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.010000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:34:03.010000 audit[2646]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2646 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.010000 audit[2646]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf9e60cf0 a2=0 a3=7ffdf9e60cdc items=0 ppid=2604 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.010000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:34:03.011000 audit[2648]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2648 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.011000 audit[2648]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe77eb5050 a2=0 a3=7ffe77eb503c items=0 ppid=2604 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.011000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:34:03.011000 audit[2649]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2649 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.011000 audit[2649]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8527ab50 a2=0 a3=7ffc8527ab3c items=0 ppid=2604 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.011000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:34:03.115000 audit[2650]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2650 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 
16:34:03.115000 audit[2650]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd39880570 a2=0 a3=7ffd3988055c items=0 ppid=2604 pid=2650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:34:03.119000 audit[2652]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2652 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.119000 audit[2652]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcd2a39e10 a2=0 a3=7ffcd2a39dfc items=0 ppid=2604 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.119000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 16:34:03.122000 audit[2655]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2655 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.122000 audit[2655]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffec39009c0 a2=0 a3=7ffec39009ac items=0 ppid=2604 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.122000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 16:34:03.123000 audit[2656]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2656 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.123000 audit[2656]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7cf2b780 a2=0 a3=7ffc7cf2b76c items=0 ppid=2604 pid=2656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.123000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:34:03.125000 audit[2658]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2658 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.125000 audit[2658]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffea5329740 a2=0 a3=7ffea532972c items=0 ppid=2604 pid=2658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.125000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:34:03.125000 audit[2659]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2659 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.125000 audit[2659]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe74931ae0 a2=0 a3=7ffe74931acc items=0 ppid=2604 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.125000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:34:03.127000 audit[2661]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2661 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.127000 audit[2661]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdebc3c9e0 a2=0 a3=7ffdebc3c9cc items=0 ppid=2604 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.127000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:34:03.129000 audit[2664]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2664 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.129000 audit[2664]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd5a15d840 a2=0 a3=7ffd5a15d82c items=0 ppid=2604 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.129000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 16:34:03.130000 audit[2665]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2665 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.130000 audit[2665]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd0d8ec90 a2=0 a3=7ffcd0d8ec7c items=0 ppid=2604 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.130000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:34:03.132000 audit[2667]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2667 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.132000 audit[2667]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc0e8a1f00 a2=0 a3=7ffc0e8a1eec items=0 
ppid=2604 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.132000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:34:03.133000 audit[2668]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2668 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.133000 audit[2668]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdbd1743d0 a2=0 a3=7ffdbd1743bc items=0 ppid=2604 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.133000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:34:03.134000 audit[2670]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2670 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.134000 audit[2670]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc279906b0 a2=0 a3=7ffc2799069c items=0 ppid=2604 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.134000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:34:03.137000 audit[2673]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2673 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.137000 audit[2673]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffe1063ce0 a2=0 a3=7fffe1063ccc items=0 ppid=2604 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.137000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:34:03.140000 audit[2676]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2676 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.140000 audit[2676]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff799ed910 a2=0 a3=7fff799ed8fc items=0 ppid=2604 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.140000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:34:03.141000 audit[2677]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2677 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.141000 audit[2677]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffdb1394f0 a2=0 a3=7fffdb1394dc items=0 ppid=2604 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.141000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:34:03.144000 audit[2679]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2679 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.144000 audit[2679]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc8fb625f0 a2=0 a3=7ffc8fb625dc items=0 ppid=2604 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.144000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:34:03.146000 audit[2682]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2682 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.146000 audit[2682]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd2ddb4610 a2=0 a3=7ffd2ddb45fc items=0 ppid=2604 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.146000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:34:03.147000 audit[2683]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2683 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.147000 audit[2683]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd8844c50 a2=0 a3=7fffd8844c3c items=0 ppid=2604 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.147000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:34:03.149000 audit[2685]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2685 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:34:03.149000 audit[2685]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffdcf081380 a2=0 a3=7ffdcf08136c items=0 ppid=2604 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.149000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:34:03.161000 audit[2691]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2691 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:03.161000 audit[2691]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fff8564cea0 a2=0 a3=7fff8564ce8c items=0 ppid=2604 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.161000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:03.164000 audit[2691]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2691 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:03.164000 audit[2691]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff8564cea0 a2=0 a3=7fff8564ce8c items=0 ppid=2604 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.164000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:03.167000 audit[2697]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2697 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.167000 audit[2697]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd52ca5720 a2=0 a3=7ffd52ca570c items=0 ppid=2604 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.167000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:34:03.169000 audit[2699]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2699 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.169000 audit[2699]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd1ae05090 a2=0 a3=7ffd1ae0507c items=0 ppid=2604 pid=2699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.169000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 16:34:03.172000 audit[2702]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2702 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.172000 audit[2702]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=836 a0=3 a1=7ffde2834160 a2=0 a3=7ffde283414c items=0 ppid=2604 pid=2702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.172000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 16:34:03.172000 audit[2703]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2703 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.172000 audit[2703]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1a2188f0 a2=0 a3=7fff1a2188dc items=0 ppid=2604 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.172000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:34:03.174000 audit[2705]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2705 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.174000 audit[2705]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd7d610670 a2=0 a3=7ffd7d61065c items=0 ppid=2604 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.174000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:34:03.175000 audit[2706]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2706 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.175000 audit[2706]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd7325380 a2=0 a3=7ffdd732536c items=0 ppid=2604 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.175000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:34:03.176000 audit[2708]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2708 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.176000 audit[2708]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc1e56a2c0 a2=0 a3=7ffc1e56a2ac items=0 ppid=2604 pid=2708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.176000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 16:34:03.179000 audit[2711]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2711 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.179000 audit[2711]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff82190af0 a2=0 a3=7fff82190adc items=0 ppid=2604 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.179000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:34:03.180000 audit[2712]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2712 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.180000 audit[2712]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe435cb160 a2=0 a3=7ffe435cb14c items=0 ppid=2604 pid=2712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.180000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:34:03.181000 audit[2714]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2714 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.181000 audit[2714]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffd88cedc0 a2=0 a3=7fffd88cedac items=0 ppid=2604 pid=2714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.181000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:34:03.182000 audit[2715]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2715 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.182000 audit[2715]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff4d177320 a2=0 a3=7fff4d17730c items=0 ppid=2604 pid=2715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.182000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:34:03.184000 audit[2717]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2717 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.184000 audit[2717]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff6ead0940 a2=0 a3=7fff6ead092c 
items=0 ppid=2604 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.184000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:34:03.186000 audit[2720]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2720 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.186000 audit[2720]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc04881020 a2=0 a3=7ffc0488100c items=0 ppid=2604 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.186000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:34:03.189000 audit[2723]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2723 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.189000 audit[2723]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc415197a0 a2=0 a3=7ffc4151978c items=0 ppid=2604 pid=2723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.189000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 16:34:03.189000 audit[2724]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2724 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.189000 audit[2724]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd17ca4e70 a2=0 a3=7ffd17ca4e5c items=0 ppid=2604 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.189000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:34:03.191000 audit[2726]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2726 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.191000 audit[2726]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff23d05210 a2=0 a3=7fff23d051fc items=0 ppid=2604 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.191000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:34:03.193000 audit[2729]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2729 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.193000 audit[2729]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffed14b0c00 a2=0 a3=7ffed14b0bec items=0 ppid=2604 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.193000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:34:03.194000 audit[2730]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2730 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.194000 audit[2730]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb2eccf30 a2=0 a3=7ffeb2eccf1c items=0 ppid=2604 pid=2730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.194000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:34:03.196000 audit[2732]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2732 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.196000 audit[2732]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd4e99b5b0 a2=0 a3=7ffd4e99b59c items=0 ppid=2604 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.196000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:34:03.197000 audit[2733]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2733 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.197000 audit[2733]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb25772e0 a2=0 a3=7fffb25772cc items=0 ppid=2604 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.197000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:34:03.198000 audit[2735]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2735 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.198000 audit[2735]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffbe28ad10 a2=0 a3=7fffbe28acfc items=0 ppid=2604 pid=2735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.198000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:34:03.200000 audit[2738]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2738 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:34:03.200000 audit[2738]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc25367f60 a2=0 a3=7ffc25367f4c items=0 ppid=2604 pid=2738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.200000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:34:03.202000 audit[2740]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2740 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:34:03.202000 audit[2740]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffe435520d0 a2=0 a3=7ffe435520bc items=0 ppid=2604 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.202000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:03.203000 audit[2740]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2740 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:34:03.203000 audit[2740]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe435520d0 a2=0 a3=7ffe435520bc items=0 ppid=2604 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:03.203000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:03.999347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2522903789.mount: Deactivated successfully. 
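The audit SYSCALL records in this section are all arch=c000003e (x86_64), and only a few syscall numbers recur: the denied SELinux watches, the BPF program loads around each container start, and the long run of NETFILTER_CFG events in which kube-proxy builds its KUBE-* chains (the PROCTITLE fields there decode, as above, to invocations such as iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle and iptables-restore -w 5 -W 100000 --noflush --counters). A small Python sketch as a quick reference, not captured output; the dict and helper name are illustrative and cover only the numbers seen here:

    # x86_64 syscall numbers that appear in the audit records of this section.
    X86_64_SYSCALLS = {
        46:  "sendmsg",            # netlink messages carrying the nft_register_* batches (NETFILTER_CFG)
        254: "inotify_add_watch",  # the denied watch on /etc/kubernetes/pki/ca.crt; exit=-13 is -EACCES
        321: "bpf",                # runc's bpf() calls, matching the BPF prog-id LOAD/UNLOAD lines
    }

    def syscall_name(nr: int) -> str:
        """Return the kernel entry point for one of the syscall numbers above."""
        return X86_64_SYSCALLS.get(nr, f"unknown({nr})")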
Jun 25 16:34:04.000837 kubelet[2425]: I0625 16:34:04.000788 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-phzxq" podStartSLOduration=3.000761131 podCreationTimestamp="2024-06-25 16:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:34:03.542372136 +0000 UTC m=+15.210800176" watchObservedRunningTime="2024-06-25 16:34:04.000761131 +0000 UTC m=+15.669189170" Jun 25 16:34:04.416016 containerd[1344]: time="2024-06-25T16:34:04.415983338Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:04.416764 containerd[1344]: time="2024-06-25T16:34:04.416738730Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076048" Jun 25 16:34:04.417572 containerd[1344]: time="2024-06-25T16:34:04.417556710Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:04.421564 containerd[1344]: time="2024-06-25T16:34:04.421543413Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:04.422264 containerd[1344]: time="2024-06-25T16:34:04.422246585Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:04.422783 containerd[1344]: time="2024-06-25T16:34:04.422767609Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 1.846672547s" Jun 25 16:34:04.422850 containerd[1344]: time="2024-06-25T16:34:04.422833274Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 16:34:04.423872 containerd[1344]: time="2024-06-25T16:34:04.423858579Z" level=info msg="CreateContainer within sandbox \"1fed61b3b7573ed560e17f8351f6fc0fcc460768f66ad2ec286621ffe1b02964\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 16:34:04.430189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3900195357.mount: Deactivated successfully. Jun 25 16:34:04.433072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1394989184.mount: Deactivated successfully. 
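Rough arithmetic on the pull reported above, using the image size and duration exactly as logged by containerd for quay.io/tigera/operator:v1.34.0 (illustrative only):

```python
# Effective pull throughput for the tigera/operator image, from the logged values.
size_bytes = 22_070_263        # size reported in the "Pulled image" line
duration_s = 1.846672547       # "in 1.846672547s"
print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")  # prints ~12.0 MB/s
```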
Jun 25 16:34:04.437368 containerd[1344]: time="2024-06-25T16:34:04.437344730Z" level=info msg="CreateContainer within sandbox \"1fed61b3b7573ed560e17f8351f6fc0fcc460768f66ad2ec286621ffe1b02964\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e265660511dceca35245aac6ea5ffaee295e440c8fca18364651b036d626596b\"" Jun 25 16:34:04.437770 containerd[1344]: time="2024-06-25T16:34:04.437757478Z" level=info msg="StartContainer for \"e265660511dceca35245aac6ea5ffaee295e440c8fca18364651b036d626596b\"" Jun 25 16:34:04.458874 systemd[1]: Started cri-containerd-e265660511dceca35245aac6ea5ffaee295e440c8fca18364651b036d626596b.scope - libcontainer container e265660511dceca35245aac6ea5ffaee295e440c8fca18364651b036d626596b. Jun 25 16:34:04.466000 audit: BPF prog-id=107 op=LOAD Jun 25 16:34:04.466000 audit: BPF prog-id=108 op=LOAD Jun 25 16:34:04.466000 audit[2758]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=2511 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:04.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532363536363035313164636563613335323435616163366561356666 Jun 25 16:34:04.466000 audit: BPF prog-id=109 op=LOAD Jun 25 16:34:04.466000 audit[2758]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=2511 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:04.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532363536363035313164636563613335323435616163366561356666 Jun 25 16:34:04.466000 audit: BPF prog-id=109 op=UNLOAD Jun 25 16:34:04.466000 audit: BPF prog-id=108 op=UNLOAD Jun 25 16:34:04.466000 audit: BPF prog-id=110 op=LOAD Jun 25 16:34:04.466000 audit[2758]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=2511 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:04.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532363536363035313164636563613335323435616163366561356666 Jun 25 16:34:04.474801 containerd[1344]: time="2024-06-25T16:34:04.474704389Z" level=info msg="StartContainer for \"e265660511dceca35245aac6ea5ffaee295e440c8fca18364651b036d626596b\" returns successfully" Jun 25 16:34:04.550156 kubelet[2425]: I0625 16:34:04.550095 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-6gs5f" podStartSLOduration=0.70259975 podCreationTimestamp="2024-06-25 16:34:02 +0000 UTC" firstStartedPulling="2024-06-25 16:34:02.57556277 +0000 UTC m=+14.243990808" lastFinishedPulling="2024-06-25 
16:34:04.423032579 +0000 UTC m=+16.091460615" observedRunningTime="2024-06-25 16:34:04.54987771 +0000 UTC m=+16.218305752" watchObservedRunningTime="2024-06-25 16:34:04.550069557 +0000 UTC m=+16.218497599" Jun 25 16:34:07.090000 audit[2789]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2789 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:07.091982 kernel: kauditd_printk_skb: 205 callbacks suppressed Jun 25 16:34:07.092021 kernel: audit: type=1325 audit(1719333247.090:455): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2789 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:07.090000 audit[2789]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffdb897ac50 a2=0 a3=7ffdb897ac3c items=0 ppid=2604 pid=2789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:07.095325 kernel: audit: type=1300 audit(1719333247.090:455): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffdb897ac50 a2=0 a3=7ffdb897ac3c items=0 ppid=2604 pid=2789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:07.095367 kernel: audit: type=1327 audit(1719333247.090:455): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:07.090000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:07.096773 kernel: audit: type=1325 audit(1719333247.095:456): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2789 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:07.095000 audit[2789]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2789 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:07.095000 audit[2789]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdb897ac50 a2=0 a3=0 items=0 ppid=2604 pid=2789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:07.100020 kernel: audit: type=1300 audit(1719333247.095:456): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdb897ac50 a2=0 a3=0 items=0 ppid=2604 pid=2789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:07.100054 kernel: audit: type=1327 audit(1719333247.095:456): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:07.095000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:07.099000 audit[2791]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2791 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:07.099000 audit[2791]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe8eb6edf0 a2=0 a3=7ffe8eb6eddc items=0 ppid=2604 pid=2791 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:07.110276 kernel: audit: type=1325 audit(1719333247.099:457): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2791 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:07.110308 kernel: audit: type=1300 audit(1719333247.099:457): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe8eb6edf0 a2=0 a3=7ffe8eb6eddc items=0 ppid=2604 pid=2791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:07.110322 kernel: audit: type=1327 audit(1719333247.099:457): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:07.099000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:07.106000 audit[2791]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2791 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:07.106000 audit[2791]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe8eb6edf0 a2=0 a3=0 items=0 ppid=2604 pid=2791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:07.106000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:07.117735 kernel: audit: type=1325 audit(1719333247.106:458): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2791 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:07.207444 kubelet[2425]: I0625 16:34:07.207416 2425 topology_manager.go:215] "Topology Admit Handler" podUID="27fee3a4-863f-430f-8132-19d12fe9a130" podNamespace="calico-system" podName="calico-typha-5c76bd779b-2v28w" Jun 25 16:34:07.214206 systemd[1]: Created slice kubepods-besteffort-pod27fee3a4_863f_430f_8132_19d12fe9a130.slice - libcontainer container kubepods-besteffort-pod27fee3a4_863f_430f_8132_19d12fe9a130.slice. Jun 25 16:34:07.254483 kubelet[2425]: I0625 16:34:07.254458 2425 topology_manager.go:215] "Topology Admit Handler" podUID="84184d69-5834-4bc0-a405-94e852ba73a1" podNamespace="calico-system" podName="calico-node-p67fj" Jun 25 16:34:07.258156 systemd[1]: Created slice kubepods-besteffort-pod84184d69_5834_4bc0_a405_94e852ba73a1.slice - libcontainer container kubepods-besteffort-pod84184d69_5834_4bc0_a405_94e852ba73a1.slice. 
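The pod_startup_latency_tracker entries above appear to report the time from pod creation to the pod being observed running, minus the image-pull window. A quick cross-check against the tigera-operator entry (timestamps copied from the log and truncated to microseconds; the formula itself is my assumption, not stated in the log):

```python
from datetime import datetime

def ts(s: str) -> datetime:
    # Parse the "YYYY-MM-DD HH:MM:SS.ffffff +0000" timestamps used in the log.
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f %z")

created   = ts("2024-06-25 16:34:02.000000 +0000")   # podCreationTimestamp
pull_from = ts("2024-06-25 16:34:02.575562 +0000")   # firstStartedPulling
pull_to   = ts("2024-06-25 16:34:04.423032 +0000")   # lastFinishedPulling
observed  = ts("2024-06-25 16:34:04.550069 +0000")   # watchObservedRunningTime

slo = (observed - created) - (pull_to - pull_from)
print(slo.total_seconds())   # ~0.7026, matching podStartSLOduration=0.70259975
```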
Jun 25 16:34:07.362759 kubelet[2425]: I0625 16:34:07.362689 2425 topology_manager.go:215] "Topology Admit Handler" podUID="8da5006e-233c-4549-81b3-ec063a911736" podNamespace="calico-system" podName="csi-node-driver-z6zw6" Jun 25 16:34:07.363053 kubelet[2425]: E0625 16:34:07.363043 2425 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6zw6" podUID="8da5006e-233c-4549-81b3-ec063a911736" Jun 25 16:34:07.364583 kubelet[2425]: I0625 16:34:07.364565 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/84184d69-5834-4bc0-a405-94e852ba73a1-policysync\") pod \"calico-node-p67fj\" (UID: \"84184d69-5834-4bc0-a405-94e852ba73a1\") " pod="calico-system/calico-node-p67fj" Jun 25 16:34:07.364661 kubelet[2425]: I0625 16:34:07.364655 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/84184d69-5834-4bc0-a405-94e852ba73a1-cni-log-dir\") pod \"calico-node-p67fj\" (UID: \"84184d69-5834-4bc0-a405-94e852ba73a1\") " pod="calico-system/calico-node-p67fj" Jun 25 16:34:07.364720 kubelet[2425]: I0625 16:34:07.364715 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84184d69-5834-4bc0-a405-94e852ba73a1-tigera-ca-bundle\") pod \"calico-node-p67fj\" (UID: \"84184d69-5834-4bc0-a405-94e852ba73a1\") " pod="calico-system/calico-node-p67fj" Jun 25 16:34:07.364787 kubelet[2425]: I0625 16:34:07.364777 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/84184d69-5834-4bc0-a405-94e852ba73a1-var-run-calico\") pod \"calico-node-p67fj\" (UID: \"84184d69-5834-4bc0-a405-94e852ba73a1\") " pod="calico-system/calico-node-p67fj" Jun 25 16:34:07.364843 kubelet[2425]: I0625 16:34:07.364837 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/84184d69-5834-4bc0-a405-94e852ba73a1-var-lib-calico\") pod \"calico-node-p67fj\" (UID: \"84184d69-5834-4bc0-a405-94e852ba73a1\") " pod="calico-system/calico-node-p67fj" Jun 25 16:34:07.364897 kubelet[2425]: I0625 16:34:07.364886 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/84184d69-5834-4bc0-a405-94e852ba73a1-flexvol-driver-host\") pod \"calico-node-p67fj\" (UID: \"84184d69-5834-4bc0-a405-94e852ba73a1\") " pod="calico-system/calico-node-p67fj" Jun 25 16:34:07.365022 kubelet[2425]: I0625 16:34:07.365015 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg8vt\" (UniqueName: \"kubernetes.io/projected/27fee3a4-863f-430f-8132-19d12fe9a130-kube-api-access-jg8vt\") pod \"calico-typha-5c76bd779b-2v28w\" (UID: \"27fee3a4-863f-430f-8132-19d12fe9a130\") " pod="calico-system/calico-typha-5c76bd779b-2v28w" Jun 25 16:34:07.365078 kubelet[2425]: I0625 16:34:07.365074 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/84184d69-5834-4bc0-a405-94e852ba73a1-cni-bin-dir\") pod \"calico-node-p67fj\" (UID: \"84184d69-5834-4bc0-a405-94e852ba73a1\") " pod="calico-system/calico-node-p67fj" Jun 25 16:34:07.365126 kubelet[2425]: I0625 16:34:07.365121 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/27fee3a4-863f-430f-8132-19d12fe9a130-typha-certs\") pod \"calico-typha-5c76bd779b-2v28w\" (UID: \"27fee3a4-863f-430f-8132-19d12fe9a130\") " pod="calico-system/calico-typha-5c76bd779b-2v28w" Jun 25 16:34:07.365179 kubelet[2425]: I0625 16:34:07.365174 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/84184d69-5834-4bc0-a405-94e852ba73a1-node-certs\") pod \"calico-node-p67fj\" (UID: \"84184d69-5834-4bc0-a405-94e852ba73a1\") " pod="calico-system/calico-node-p67fj" Jun 25 16:34:07.365234 kubelet[2425]: I0625 16:34:07.365224 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84184d69-5834-4bc0-a405-94e852ba73a1-lib-modules\") pod \"calico-node-p67fj\" (UID: \"84184d69-5834-4bc0-a405-94e852ba73a1\") " pod="calico-system/calico-node-p67fj" Jun 25 16:34:07.365625 kubelet[2425]: I0625 16:34:07.365413 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84184d69-5834-4bc0-a405-94e852ba73a1-xtables-lock\") pod \"calico-node-p67fj\" (UID: \"84184d69-5834-4bc0-a405-94e852ba73a1\") " pod="calico-system/calico-node-p67fj" Jun 25 16:34:07.365625 kubelet[2425]: I0625 16:34:07.365439 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blzj2\" (UniqueName: \"kubernetes.io/projected/84184d69-5834-4bc0-a405-94e852ba73a1-kube-api-access-blzj2\") pod \"calico-node-p67fj\" (UID: \"84184d69-5834-4bc0-a405-94e852ba73a1\") " pod="calico-system/calico-node-p67fj" Jun 25 16:34:07.365625 kubelet[2425]: I0625 16:34:07.365454 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/84184d69-5834-4bc0-a405-94e852ba73a1-cni-net-dir\") pod \"calico-node-p67fj\" (UID: \"84184d69-5834-4bc0-a405-94e852ba73a1\") " pod="calico-system/calico-node-p67fj" Jun 25 16:34:07.365625 kubelet[2425]: I0625 16:34:07.365510 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27fee3a4-863f-430f-8132-19d12fe9a130-tigera-ca-bundle\") pod \"calico-typha-5c76bd779b-2v28w\" (UID: \"27fee3a4-863f-430f-8132-19d12fe9a130\") " pod="calico-system/calico-typha-5c76bd779b-2v28w" Jun 25 16:34:07.465669 kubelet[2425]: I0625 16:34:07.465650 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8da5006e-233c-4549-81b3-ec063a911736-kubelet-dir\") pod \"csi-node-driver-z6zw6\" (UID: \"8da5006e-233c-4549-81b3-ec063a911736\") " pod="calico-system/csi-node-driver-z6zw6" Jun 25 16:34:07.465872 kubelet[2425]: I0625 16:34:07.465857 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/8da5006e-233c-4549-81b3-ec063a911736-socket-dir\") pod \"csi-node-driver-z6zw6\" (UID: \"8da5006e-233c-4549-81b3-ec063a911736\") " pod="calico-system/csi-node-driver-z6zw6" Jun 25 16:34:07.465944 kubelet[2425]: I0625 16:34:07.465937 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8da5006e-233c-4549-81b3-ec063a911736-registration-dir\") pod \"csi-node-driver-z6zw6\" (UID: \"8da5006e-233c-4549-81b3-ec063a911736\") " pod="calico-system/csi-node-driver-z6zw6" Jun 25 16:34:07.466121 kubelet[2425]: I0625 16:34:07.466106 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8da5006e-233c-4549-81b3-ec063a911736-varrun\") pod \"csi-node-driver-z6zw6\" (UID: \"8da5006e-233c-4549-81b3-ec063a911736\") " pod="calico-system/csi-node-driver-z6zw6" Jun 25 16:34:07.466201 kubelet[2425]: I0625 16:34:07.466195 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slktc\" (UniqueName: \"kubernetes.io/projected/8da5006e-233c-4549-81b3-ec063a911736-kube-api-access-slktc\") pod \"csi-node-driver-z6zw6\" (UID: \"8da5006e-233c-4549-81b3-ec063a911736\") " pod="calico-system/csi-node-driver-z6zw6" Jun 25 16:34:07.483960 kubelet[2425]: E0625 16:34:07.483948 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.484094 kubelet[2425]: W0625 16:34:07.484080 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.484172 kubelet[2425]: E0625 16:34:07.484166 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.495918 kubelet[2425]: E0625 16:34:07.495901 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.496024 kubelet[2425]: W0625 16:34:07.496015 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.496084 kubelet[2425]: E0625 16:34:07.496079 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.503038 kubelet[2425]: E0625 16:34:07.503029 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.503127 kubelet[2425]: W0625 16:34:07.503120 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.503202 kubelet[2425]: E0625 16:34:07.503197 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:34:07.526041 containerd[1344]: time="2024-06-25T16:34:07.526013779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c76bd779b-2v28w,Uid:27fee3a4-863f-430f-8132-19d12fe9a130,Namespace:calico-system,Attempt:0,}" Jun 25 16:34:07.560635 containerd[1344]: time="2024-06-25T16:34:07.560470282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p67fj,Uid:84184d69-5834-4bc0-a405-94e852ba73a1,Namespace:calico-system,Attempt:0,}" Jun 25 16:34:07.568030 kubelet[2425]: E0625 16:34:07.567309 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.568030 kubelet[2425]: W0625 16:34:07.567321 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.568030 kubelet[2425]: E0625 16:34:07.567349 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.568030 kubelet[2425]: E0625 16:34:07.567480 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.568030 kubelet[2425]: W0625 16:34:07.567496 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.568030 kubelet[2425]: E0625 16:34:07.567506 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.568030 kubelet[2425]: E0625 16:34:07.567599 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.568030 kubelet[2425]: W0625 16:34:07.567603 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.568030 kubelet[2425]: E0625 16:34:07.567610 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.568030 kubelet[2425]: E0625 16:34:07.567694 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.568268 kubelet[2425]: W0625 16:34:07.567698 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.568268 kubelet[2425]: E0625 16:34:07.567705 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:34:07.568268 kubelet[2425]: E0625 16:34:07.567805 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.568268 kubelet[2425]: W0625 16:34:07.567812 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.568268 kubelet[2425]: E0625 16:34:07.567818 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.568268 kubelet[2425]: E0625 16:34:07.567916 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.568268 kubelet[2425]: W0625 16:34:07.567920 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.568268 kubelet[2425]: E0625 16:34:07.567926 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.568268 kubelet[2425]: E0625 16:34:07.568011 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.568268 kubelet[2425]: W0625 16:34:07.568015 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.568465 kubelet[2425]: E0625 16:34:07.568029 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.568465 kubelet[2425]: E0625 16:34:07.568106 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.568465 kubelet[2425]: W0625 16:34:07.568111 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.568465 kubelet[2425]: E0625 16:34:07.568117 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.568465 kubelet[2425]: E0625 16:34:07.568209 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.568465 kubelet[2425]: W0625 16:34:07.568213 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.568465 kubelet[2425]: E0625 16:34:07.568218 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:34:07.568465 kubelet[2425]: E0625 16:34:07.568446 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.568465 kubelet[2425]: W0625 16:34:07.568451 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.568465 kubelet[2425]: E0625 16:34:07.568457 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.571356 kubelet[2425]: E0625 16:34:07.571341 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.571356 kubelet[2425]: W0625 16:34:07.571352 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.571447 kubelet[2425]: E0625 16:34:07.571365 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.572718 kubelet[2425]: E0625 16:34:07.571800 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.572718 kubelet[2425]: W0625 16:34:07.571808 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.572718 kubelet[2425]: E0625 16:34:07.571817 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.572994 kubelet[2425]: E0625 16:34:07.572981 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.572994 kubelet[2425]: W0625 16:34:07.572988 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.572994 kubelet[2425]: E0625 16:34:07.572996 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.573112 kubelet[2425]: E0625 16:34:07.573094 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.573112 kubelet[2425]: W0625 16:34:07.573109 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.573156 kubelet[2425]: E0625 16:34:07.573119 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:34:07.573213 kubelet[2425]: E0625 16:34:07.573203 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.573213 kubelet[2425]: W0625 16:34:07.573210 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.573254 kubelet[2425]: E0625 16:34:07.573218 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.573308 kubelet[2425]: E0625 16:34:07.573299 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.573308 kubelet[2425]: W0625 16:34:07.573306 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.573348 kubelet[2425]: E0625 16:34:07.573312 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.573427 kubelet[2425]: E0625 16:34:07.573418 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.573427 kubelet[2425]: W0625 16:34:07.573424 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.573472 kubelet[2425]: E0625 16:34:07.573430 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.573789 kubelet[2425]: E0625 16:34:07.573584 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.573789 kubelet[2425]: W0625 16:34:07.573600 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.573789 kubelet[2425]: E0625 16:34:07.573608 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.573933 kubelet[2425]: E0625 16:34:07.573924 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.573933 kubelet[2425]: W0625 16:34:07.573930 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.573978 kubelet[2425]: E0625 16:34:07.573937 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:34:07.574741 kubelet[2425]: E0625 16:34:07.574069 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.574741 kubelet[2425]: W0625 16:34:07.574075 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.574741 kubelet[2425]: E0625 16:34:07.574082 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.577676 kubelet[2425]: E0625 16:34:07.577604 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.577676 kubelet[2425]: W0625 16:34:07.577613 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.577676 kubelet[2425]: E0625 16:34:07.577625 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.577902 kubelet[2425]: E0625 16:34:07.577852 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.577902 kubelet[2425]: W0625 16:34:07.577858 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.577902 kubelet[2425]: E0625 16:34:07.577865 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.579248 kubelet[2425]: E0625 16:34:07.579178 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.579248 kubelet[2425]: W0625 16:34:07.579188 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.579248 kubelet[2425]: E0625 16:34:07.579203 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.579418 kubelet[2425]: E0625 16:34:07.579351 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.579418 kubelet[2425]: W0625 16:34:07.579357 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.579418 kubelet[2425]: E0625 16:34:07.579364 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:34:07.579530 kubelet[2425]: E0625 16:34:07.579503 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.579530 kubelet[2425]: W0625 16:34:07.579508 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.579530 kubelet[2425]: E0625 16:34:07.579515 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.589198 kubelet[2425]: E0625 16:34:07.580451 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:07.589198 kubelet[2425]: W0625 16:34:07.580457 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:07.589198 kubelet[2425]: E0625 16:34:07.580466 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:07.589263 containerd[1344]: time="2024-06-25T16:34:07.581895167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:34:07.589263 containerd[1344]: time="2024-06-25T16:34:07.581930456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:07.589263 containerd[1344]: time="2024-06-25T16:34:07.581943208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:34:07.589263 containerd[1344]: time="2024-06-25T16:34:07.581952098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:07.657846 systemd[1]: Started cri-containerd-232ee802bb35124f68f01e6c5c5d3d126edb79a77445a508ec54d5faa7b26f5d.scope - libcontainer container 232ee802bb35124f68f01e6c5c5d3d126edb79a77445a508ec54d5faa7b26f5d. 
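The repeated FlexVolume warnings in this stretch all come from kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds and finding no executable there; they are noise unless a flexvolume driver is actually expected at that path. A minimal on-node check, with the path copied verbatim from the log:

```python
import os

# Path taken verbatim from the kubelet driver-call warnings above.
DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

if os.path.isfile(DRIVER) and os.access(DRIVER, os.X_OK):
    print("flexvolume driver present and executable")
else:
    print("flexvolume driver missing or not executable:", DRIVER)
```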
Jun 25 16:34:07.665000 audit: BPF prog-id=111 op=LOAD Jun 25 16:34:07.665000 audit: BPF prog-id=112 op=LOAD Jun 25 16:34:07.665000 audit[2844]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2828 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:07.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233326565383032626233353132346636386630316536633563356433 Jun 25 16:34:07.665000 audit: BPF prog-id=113 op=LOAD Jun 25 16:34:07.665000 audit[2844]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2828 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:07.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233326565383032626233353132346636386630316536633563356433 Jun 25 16:34:07.665000 audit: BPF prog-id=113 op=UNLOAD Jun 25 16:34:07.665000 audit: BPF prog-id=112 op=UNLOAD Jun 25 16:34:07.665000 audit: BPF prog-id=114 op=LOAD Jun 25 16:34:07.665000 audit[2844]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2828 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:07.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233326565383032626233353132346636386630316536633563356433 Jun 25 16:34:07.688840 containerd[1344]: time="2024-06-25T16:34:07.688816055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c76bd779b-2v28w,Uid:27fee3a4-863f-430f-8132-19d12fe9a130,Namespace:calico-system,Attempt:0,} returns sandbox id \"232ee802bb35124f68f01e6c5c5d3d126edb79a77445a508ec54d5faa7b26f5d\"" Jun 25 16:34:07.725401 containerd[1344]: time="2024-06-25T16:34:07.725374669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 16:34:07.897239 containerd[1344]: time="2024-06-25T16:34:07.897173892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:34:07.897415 containerd[1344]: time="2024-06-25T16:34:07.897236977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:07.897415 containerd[1344]: time="2024-06-25T16:34:07.897256046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:34:07.897415 containerd[1344]: time="2024-06-25T16:34:07.897271470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:07.910817 systemd[1]: Started cri-containerd-48a87ec88553315310592643aab9d3339fc8579f21fa2db7ff9b8f6bb47bc723.scope - libcontainer container 48a87ec88553315310592643aab9d3339fc8579f21fa2db7ff9b8f6bb47bc723. Jun 25 16:34:07.916000 audit: BPF prog-id=115 op=LOAD Jun 25 16:34:07.916000 audit: BPF prog-id=116 op=LOAD Jun 25 16:34:07.916000 audit[2884]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00013b988 a2=78 a3=0 items=0 ppid=2874 pid=2884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:07.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438613837656338383535333331353331303539323634336161623964 Jun 25 16:34:07.916000 audit: BPF prog-id=117 op=LOAD Jun 25 16:34:07.916000 audit[2884]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00013b720 a2=78 a3=0 items=0 ppid=2874 pid=2884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:07.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438613837656338383535333331353331303539323634336161623964 Jun 25 16:34:07.916000 audit: BPF prog-id=117 op=UNLOAD Jun 25 16:34:07.916000 audit: BPF prog-id=116 op=UNLOAD Jun 25 16:34:07.916000 audit: BPF prog-id=118 op=LOAD Jun 25 16:34:07.916000 audit[2884]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00013bbe0 a2=78 a3=0 items=0 ppid=2874 pid=2884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:07.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438613837656338383535333331353331303539323634336161623964 Jun 25 16:34:07.925177 containerd[1344]: time="2024-06-25T16:34:07.925150874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p67fj,Uid:84184d69-5834-4bc0-a405-94e852ba73a1,Namespace:calico-system,Attempt:0,} returns sandbox id \"48a87ec88553315310592643aab9d3339fc8579f21fa2db7ff9b8f6bb47bc723\"" Jun 25 16:34:08.118000 audit[2909]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:08.118000 audit[2909]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd7313bc20 a2=0 a3=7ffd7313bc0c items=0 ppid=2604 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:08.118000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:08.118000 audit[2909]: 
NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:08.118000 audit[2909]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd7313bc20 a2=0 a3=0 items=0 ppid=2604 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:08.118000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:08.484336 kubelet[2425]: E0625 16:34:08.484215 2425 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6zw6" podUID="8da5006e-233c-4549-81b3-ec063a911736" Jun 25 16:34:09.408348 containerd[1344]: time="2024-06-25T16:34:09.408320078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:09.408986 containerd[1344]: time="2024-06-25T16:34:09.408955238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 16:34:09.409215 containerd[1344]: time="2024-06-25T16:34:09.409201125Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:09.410191 containerd[1344]: time="2024-06-25T16:34:09.410176897Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:09.411033 containerd[1344]: time="2024-06-25T16:34:09.411017206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:09.411773 containerd[1344]: time="2024-06-25T16:34:09.411756381Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 1.686248126s" Jun 25 16:34:09.411804 containerd[1344]: time="2024-06-25T16:34:09.411776061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 16:34:09.414012 containerd[1344]: time="2024-06-25T16:34:09.413311010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 16:34:09.419461 containerd[1344]: time="2024-06-25T16:34:09.419432645Z" level=info msg="CreateContainer within sandbox \"232ee802bb35124f68f01e6c5c5d3d126edb79a77445a508ec54d5faa7b26f5d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 16:34:09.425188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1976426993.mount: Deactivated successfully. 
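On the `\x2d` in the transient mount unit names above (e.g. var-lib-containerd-tmpmounts-containerd\x2dmount1976426993.mount): this is systemd unit-name escaping of the mount path, where literal dashes become \x2d and path separators become dashes. A minimal sketch of just those two rules (real systemd-escape also handles other special characters):

```python
def systemd_path_escape(path: str) -> str:
    # Escape literal '-' first, then turn '/' separators into '-'.
    # (systemd additionally escapes other non [A-Za-z0-9:_.] bytes as \xNN.)
    return path.strip("/").replace("-", "\\x2d").replace("/", "-")

unit = systemd_path_escape("/var/lib/containerd/tmpmounts/containerd-mount1976426993") + ".mount"
print(unit)  # var-lib-containerd-tmpmounts-containerd\x2dmount1976426993.mount
```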
Jun 25 16:34:09.439601 containerd[1344]: time="2024-06-25T16:34:09.439573617Z" level=info msg="CreateContainer within sandbox \"232ee802bb35124f68f01e6c5c5d3d126edb79a77445a508ec54d5faa7b26f5d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"691a655a8c9525019a4dda22ef3dcde673c12e09c5240aa7464958e7caa00b68\"" Jun 25 16:34:09.440676 containerd[1344]: time="2024-06-25T16:34:09.440064149Z" level=info msg="StartContainer for \"691a655a8c9525019a4dda22ef3dcde673c12e09c5240aa7464958e7caa00b68\"" Jun 25 16:34:09.460820 systemd[1]: Started cri-containerd-691a655a8c9525019a4dda22ef3dcde673c12e09c5240aa7464958e7caa00b68.scope - libcontainer container 691a655a8c9525019a4dda22ef3dcde673c12e09c5240aa7464958e7caa00b68. Jun 25 16:34:09.466000 audit: BPF prog-id=119 op=LOAD Jun 25 16:34:09.466000 audit: BPF prog-id=120 op=LOAD Jun 25 16:34:09.466000 audit[2926]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2828 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:09.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639316136353561386339353235303139613464646132326566336463 Jun 25 16:34:09.466000 audit: BPF prog-id=121 op=LOAD Jun 25 16:34:09.466000 audit[2926]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2828 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:09.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639316136353561386339353235303139613464646132326566336463 Jun 25 16:34:09.466000 audit: BPF prog-id=121 op=UNLOAD Jun 25 16:34:09.466000 audit: BPF prog-id=120 op=UNLOAD Jun 25 16:34:09.466000 audit: BPF prog-id=122 op=LOAD Jun 25 16:34:09.466000 audit[2926]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2828 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:09.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639316136353561386339353235303139613464646132326566336463 Jun 25 16:34:09.529929 containerd[1344]: time="2024-06-25T16:34:09.529905392Z" level=info msg="StartContainer for \"691a655a8c9525019a4dda22ef3dcde673c12e09c5240aa7464958e7caa00b68\" returns successfully" Jun 25 16:34:09.551097 kubelet[2425]: I0625 16:34:09.550849 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5c76bd779b-2v28w" podStartSLOduration=0.828407473 podCreationTimestamp="2024-06-25 16:34:07 +0000 UTC" firstStartedPulling="2024-06-25 16:34:07.689581343 +0000 UTC m=+19.358009379" lastFinishedPulling="2024-06-25 
16:34:09.411995397 +0000 UTC m=+21.080423441" observedRunningTime="2024-06-25 16:34:09.550332678 +0000 UTC m=+21.218760722" watchObservedRunningTime="2024-06-25 16:34:09.550821535 +0000 UTC m=+21.219249570" Jun 25 16:34:09.577955 kubelet[2425]: E0625 16:34:09.577639 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.577955 kubelet[2425]: W0625 16:34:09.577660 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.577955 kubelet[2425]: E0625 16:34:09.577675 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.577955 kubelet[2425]: E0625 16:34:09.577789 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.577955 kubelet[2425]: W0625 16:34:09.577793 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.577955 kubelet[2425]: E0625 16:34:09.577800 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.577955 kubelet[2425]: E0625 16:34:09.577893 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.577955 kubelet[2425]: W0625 16:34:09.577897 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.577955 kubelet[2425]: E0625 16:34:09.577904 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.578287 kubelet[2425]: E0625 16:34:09.578217 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.578287 kubelet[2425]: W0625 16:34:09.578225 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.578287 kubelet[2425]: E0625 16:34:09.578231 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:34:09.578437 kubelet[2425]: E0625 16:34:09.578387 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.578437 kubelet[2425]: W0625 16:34:09.578392 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.578437 kubelet[2425]: E0625 16:34:09.578399 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.578577 kubelet[2425]: E0625 16:34:09.578527 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.578577 kubelet[2425]: W0625 16:34:09.578532 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.578577 kubelet[2425]: E0625 16:34:09.578538 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.578716 kubelet[2425]: E0625 16:34:09.578668 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.578716 kubelet[2425]: W0625 16:34:09.578673 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.578716 kubelet[2425]: E0625 16:34:09.578680 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.578877 kubelet[2425]: E0625 16:34:09.578814 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.578877 kubelet[2425]: W0625 16:34:09.578820 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.578877 kubelet[2425]: E0625 16:34:09.578826 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.579047 kubelet[2425]: E0625 16:34:09.578999 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.579047 kubelet[2425]: W0625 16:34:09.579004 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.579047 kubelet[2425]: E0625 16:34:09.579010 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:34:09.579182 kubelet[2425]: E0625 16:34:09.579135 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.579182 kubelet[2425]: W0625 16:34:09.579140 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.579182 kubelet[2425]: E0625 16:34:09.579147 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.579369 kubelet[2425]: E0625 16:34:09.579320 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.579369 kubelet[2425]: W0625 16:34:09.579325 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.579369 kubelet[2425]: E0625 16:34:09.579331 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.579520 kubelet[2425]: E0625 16:34:09.579461 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.579520 kubelet[2425]: W0625 16:34:09.579466 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.579520 kubelet[2425]: E0625 16:34:09.579472 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.579662 kubelet[2425]: E0625 16:34:09.579612 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.579662 kubelet[2425]: W0625 16:34:09.579617 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.579662 kubelet[2425]: E0625 16:34:09.579623 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.579835 kubelet[2425]: E0625 16:34:09.579784 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.579835 kubelet[2425]: W0625 16:34:09.579789 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.579835 kubelet[2425]: E0625 16:34:09.579795 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:34:09.580050 kubelet[2425]: E0625 16:34:09.579925 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.580050 kubelet[2425]: W0625 16:34:09.579930 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.580050 kubelet[2425]: E0625 16:34:09.579936 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.580221 kubelet[2425]: E0625 16:34:09.580147 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.580221 kubelet[2425]: W0625 16:34:09.580152 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.580221 kubelet[2425]: E0625 16:34:09.580158 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.580648 kubelet[2425]: E0625 16:34:09.580354 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.580648 kubelet[2425]: W0625 16:34:09.580359 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.580648 kubelet[2425]: E0625 16:34:09.580368 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.580648 kubelet[2425]: E0625 16:34:09.580521 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.580648 kubelet[2425]: W0625 16:34:09.580526 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.580648 kubelet[2425]: E0625 16:34:09.580535 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.580876 kubelet[2425]: E0625 16:34:09.580811 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.580876 kubelet[2425]: W0625 16:34:09.580817 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.580876 kubelet[2425]: E0625 16:34:09.580825 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:34:09.581033 kubelet[2425]: E0625 16:34:09.580995 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.581033 kubelet[2425]: W0625 16:34:09.581000 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.581033 kubelet[2425]: E0625 16:34:09.581024 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.581572 kubelet[2425]: E0625 16:34:09.581120 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.581572 kubelet[2425]: W0625 16:34:09.581128 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.581572 kubelet[2425]: E0625 16:34:09.581141 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.581572 kubelet[2425]: E0625 16:34:09.581222 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.581572 kubelet[2425]: W0625 16:34:09.581226 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.581572 kubelet[2425]: E0625 16:34:09.581233 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.581572 kubelet[2425]: E0625 16:34:09.581309 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.581572 kubelet[2425]: W0625 16:34:09.581313 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.581572 kubelet[2425]: E0625 16:34:09.581320 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.581572 kubelet[2425]: E0625 16:34:09.581398 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.581819 kubelet[2425]: W0625 16:34:09.581402 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.581819 kubelet[2425]: E0625 16:34:09.581409 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:34:09.581819 kubelet[2425]: E0625 16:34:09.581642 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.581819 kubelet[2425]: W0625 16:34:09.581647 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.581819 kubelet[2425]: E0625 16:34:09.581653 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.581819 kubelet[2425]: E0625 16:34:09.581747 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.581819 kubelet[2425]: W0625 16:34:09.581751 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.581819 kubelet[2425]: E0625 16:34:09.581761 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.581957 kubelet[2425]: E0625 16:34:09.581831 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.581957 kubelet[2425]: W0625 16:34:09.581835 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.581957 kubelet[2425]: E0625 16:34:09.581841 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.581957 kubelet[2425]: E0625 16:34:09.581907 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.581957 kubelet[2425]: W0625 16:34:09.581911 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.581957 kubelet[2425]: E0625 16:34:09.581917 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.582057 kubelet[2425]: E0625 16:34:09.581993 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.582057 kubelet[2425]: W0625 16:34:09.581997 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.582057 kubelet[2425]: E0625 16:34:09.582002 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:34:09.582439 kubelet[2425]: E0625 16:34:09.582152 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.582439 kubelet[2425]: W0625 16:34:09.582159 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.582439 kubelet[2425]: E0625 16:34:09.582167 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.582439 kubelet[2425]: E0625 16:34:09.582239 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.582439 kubelet[2425]: W0625 16:34:09.582243 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.582439 kubelet[2425]: E0625 16:34:09.582249 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.582439 kubelet[2425]: E0625 16:34:09.582335 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.582439 kubelet[2425]: W0625 16:34:09.582339 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.582439 kubelet[2425]: E0625 16:34:09.582345 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:09.582708 kubelet[2425]: E0625 16:34:09.582689 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:09.582708 kubelet[2425]: W0625 16:34:09.582695 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:09.582708 kubelet[2425]: E0625 16:34:09.582703 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:34:10.479237 kubelet[2425]: E0625 16:34:10.479051 2425 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6zw6" podUID="8da5006e-233c-4549-81b3-ec063a911736" Jun 25 16:34:10.551591 kubelet[2425]: I0625 16:34:10.551251 2425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:34:10.586790 kubelet[2425]: E0625 16:34:10.586305 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.586790 kubelet[2425]: W0625 16:34:10.586318 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.586790 kubelet[2425]: E0625 16:34:10.586335 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.587211 kubelet[2425]: E0625 16:34:10.587148 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.587211 kubelet[2425]: W0625 16:34:10.587156 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.587211 kubelet[2425]: E0625 16:34:10.587166 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.587367 kubelet[2425]: E0625 16:34:10.587313 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.587367 kubelet[2425]: W0625 16:34:10.587318 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.587367 kubelet[2425]: E0625 16:34:10.587325 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.587527 kubelet[2425]: E0625 16:34:10.587464 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.587527 kubelet[2425]: W0625 16:34:10.587470 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.587527 kubelet[2425]: E0625 16:34:10.587477 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:34:10.587687 kubelet[2425]: E0625 16:34:10.587637 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.587687 kubelet[2425]: W0625 16:34:10.587643 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.587687 kubelet[2425]: E0625 16:34:10.587649 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.587848 kubelet[2425]: E0625 16:34:10.587799 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.587848 kubelet[2425]: W0625 16:34:10.587804 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.587848 kubelet[2425]: E0625 16:34:10.587810 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.587995 kubelet[2425]: E0625 16:34:10.587945 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.587995 kubelet[2425]: W0625 16:34:10.587951 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.587995 kubelet[2425]: E0625 16:34:10.587957 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.588164 kubelet[2425]: E0625 16:34:10.588101 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.588164 kubelet[2425]: W0625 16:34:10.588106 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.588164 kubelet[2425]: E0625 16:34:10.588113 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.589064 kubelet[2425]: E0625 16:34:10.588334 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.589064 kubelet[2425]: W0625 16:34:10.588340 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.589064 kubelet[2425]: E0625 16:34:10.588346 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:34:10.589064 kubelet[2425]: E0625 16:34:10.588710 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.589064 kubelet[2425]: W0625 16:34:10.588715 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.589064 kubelet[2425]: E0625 16:34:10.588723 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.589064 kubelet[2425]: E0625 16:34:10.588815 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.589064 kubelet[2425]: W0625 16:34:10.588820 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.589064 kubelet[2425]: E0625 16:34:10.588826 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.589064 kubelet[2425]: E0625 16:34:10.588899 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.589324 kubelet[2425]: W0625 16:34:10.588903 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.589324 kubelet[2425]: E0625 16:34:10.588909 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.589324 kubelet[2425]: E0625 16:34:10.588999 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.589324 kubelet[2425]: W0625 16:34:10.589004 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.589324 kubelet[2425]: E0625 16:34:10.589010 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.589324 kubelet[2425]: E0625 16:34:10.589085 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.589324 kubelet[2425]: W0625 16:34:10.589089 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.589324 kubelet[2425]: E0625 16:34:10.589095 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:34:10.589324 kubelet[2425]: E0625 16:34:10.589164 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.589324 kubelet[2425]: W0625 16:34:10.589169 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.589498 kubelet[2425]: E0625 16:34:10.589174 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.616631 containerd[1344]: time="2024-06-25T16:34:10.616603501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:10.617144 containerd[1344]: time="2024-06-25T16:34:10.617108189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 16:34:10.617385 containerd[1344]: time="2024-06-25T16:34:10.617372156Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:10.618213 containerd[1344]: time="2024-06-25T16:34:10.618198341Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:10.621854 containerd[1344]: time="2024-06-25T16:34:10.621835921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:10.622334 containerd[1344]: time="2024-06-25T16:34:10.622315823Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.208985197s" Jun 25 16:34:10.622389 containerd[1344]: time="2024-06-25T16:34:10.622376428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 16:34:10.624147 containerd[1344]: time="2024-06-25T16:34:10.624105122Z" level=info msg="CreateContainer within sandbox \"48a87ec88553315310592643aab9d3339fc8579f21fa2db7ff9b8f6bb47bc723\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:34:10.660472 containerd[1344]: time="2024-06-25T16:34:10.660448870Z" level=info msg="CreateContainer within sandbox \"48a87ec88553315310592643aab9d3339fc8579f21fa2db7ff9b8f6bb47bc723\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5d32437006577ea0ed83684bc6af90f73d9751f152104be4d8e341fab92e84bf\"" Jun 25 16:34:10.661715 containerd[1344]: time="2024-06-25T16:34:10.661701161Z" level=info msg="StartContainer for \"5d32437006577ea0ed83684bc6af90f73d9751f152104be4d8e341fab92e84bf\"" Jun 25 16:34:10.683769 systemd[1]: 
run-containerd-runc-k8s.io-5d32437006577ea0ed83684bc6af90f73d9751f152104be4d8e341fab92e84bf-runc.DWdgCd.mount: Deactivated successfully. Jun 25 16:34:10.688443 kubelet[2425]: E0625 16:34:10.685825 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.688443 kubelet[2425]: W0625 16:34:10.685837 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.688443 kubelet[2425]: E0625 16:34:10.685851 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.688443 kubelet[2425]: E0625 16:34:10.685948 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.688443 kubelet[2425]: W0625 16:34:10.685954 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.688443 kubelet[2425]: E0625 16:34:10.685960 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.688443 kubelet[2425]: E0625 16:34:10.686051 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.688443 kubelet[2425]: W0625 16:34:10.686055 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.688443 kubelet[2425]: E0625 16:34:10.686061 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.688443 kubelet[2425]: E0625 16:34:10.686157 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.688680 kubelet[2425]: W0625 16:34:10.686161 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.688680 kubelet[2425]: E0625 16:34:10.686168 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.688680 kubelet[2425]: E0625 16:34:10.686253 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.688680 kubelet[2425]: W0625 16:34:10.686257 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.688680 kubelet[2425]: E0625 16:34:10.686263 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:34:10.688680 kubelet[2425]: E0625 16:34:10.686343 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.688680 kubelet[2425]: W0625 16:34:10.686347 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.688680 kubelet[2425]: E0625 16:34:10.686352 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.688680 kubelet[2425]: E0625 16:34:10.686441 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.688680 kubelet[2425]: W0625 16:34:10.686445 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.688893 kubelet[2425]: E0625 16:34:10.686451 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.688893 kubelet[2425]: E0625 16:34:10.686701 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.688893 kubelet[2425]: W0625 16:34:10.686706 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.688893 kubelet[2425]: E0625 16:34:10.686713 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.688893 kubelet[2425]: E0625 16:34:10.686813 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.688893 kubelet[2425]: W0625 16:34:10.686818 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.688893 kubelet[2425]: E0625 16:34:10.686824 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.688893 kubelet[2425]: E0625 16:34:10.686903 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.688893 kubelet[2425]: W0625 16:34:10.686907 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.688893 kubelet[2425]: E0625 16:34:10.686913 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:34:10.689069 kubelet[2425]: E0625 16:34:10.686989 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.689069 kubelet[2425]: W0625 16:34:10.686993 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.689069 kubelet[2425]: E0625 16:34:10.686999 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.689069 kubelet[2425]: E0625 16:34:10.687082 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.689069 kubelet[2425]: W0625 16:34:10.687086 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.689069 kubelet[2425]: E0625 16:34:10.687092 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.689069 kubelet[2425]: E0625 16:34:10.687301 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.689069 kubelet[2425]: W0625 16:34:10.687305 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.689069 kubelet[2425]: E0625 16:34:10.687310 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.689069 kubelet[2425]: E0625 16:34:10.687396 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.689280 kubelet[2425]: W0625 16:34:10.687401 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.689280 kubelet[2425]: E0625 16:34:10.687407 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.689280 kubelet[2425]: E0625 16:34:10.687486 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.689280 kubelet[2425]: W0625 16:34:10.687490 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.689280 kubelet[2425]: E0625 16:34:10.687497 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:34:10.689280 kubelet[2425]: E0625 16:34:10.687573 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.689280 kubelet[2425]: W0625 16:34:10.687577 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.689280 kubelet[2425]: E0625 16:34:10.687582 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.689280 kubelet[2425]: E0625 16:34:10.687668 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.689280 kubelet[2425]: W0625 16:34:10.687672 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.689493 kubelet[2425]: E0625 16:34:10.687678 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.689493 kubelet[2425]: E0625 16:34:10.688070 2425 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:34:10.689493 kubelet[2425]: W0625 16:34:10.688074 2425 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:34:10.689493 kubelet[2425]: E0625 16:34:10.688081 2425 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:34:10.689834 systemd[1]: Started cri-containerd-5d32437006577ea0ed83684bc6af90f73d9751f152104be4d8e341fab92e84bf.scope - libcontainer container 5d32437006577ea0ed83684bc6af90f73d9751f152104be4d8e341fab92e84bf. 
Jun 25 16:34:10.698000 audit: BPF prog-id=123 op=LOAD Jun 25 16:34:10.698000 audit[3016]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2874 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:10.698000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564333234333730303635373765613065643833363834626336616639 Jun 25 16:34:10.698000 audit: BPF prog-id=124 op=LOAD Jun 25 16:34:10.698000 audit[3016]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2874 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:10.698000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564333234333730303635373765613065643833363834626336616639 Jun 25 16:34:10.698000 audit: BPF prog-id=124 op=UNLOAD Jun 25 16:34:10.698000 audit: BPF prog-id=123 op=UNLOAD Jun 25 16:34:10.698000 audit: BPF prog-id=125 op=LOAD Jun 25 16:34:10.698000 audit[3016]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2874 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:10.698000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564333234333730303635373765613065643833363834626336616639 Jun 25 16:34:10.716873 systemd[1]: cri-containerd-5d32437006577ea0ed83684bc6af90f73d9751f152104be4d8e341fab92e84bf.scope: Deactivated successfully. Jun 25 16:34:10.720000 audit: BPF prog-id=125 op=UNLOAD Jun 25 16:34:10.726933 containerd[1344]: time="2024-06-25T16:34:10.726906562Z" level=info msg="StartContainer for \"5d32437006577ea0ed83684bc6af90f73d9751f152104be4d8e341fab92e84bf\" returns successfully" Jun 25 16:34:10.981746 containerd[1344]: time="2024-06-25T16:34:10.953307573Z" level=info msg="shim disconnected" id=5d32437006577ea0ed83684bc6af90f73d9751f152104be4d8e341fab92e84bf namespace=k8s.io Jun 25 16:34:10.981746 containerd[1344]: time="2024-06-25T16:34:10.981568222Z" level=warning msg="cleaning up after shim disconnected" id=5d32437006577ea0ed83684bc6af90f73d9751f152104be4d8e341fab92e84bf namespace=k8s.io Jun 25 16:34:10.981746 containerd[1344]: time="2024-06-25T16:34:10.981580317Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:34:11.546814 containerd[1344]: time="2024-06-25T16:34:11.546793263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 16:34:11.654143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d32437006577ea0ed83684bc6af90f73d9751f152104be4d8e341fab92e84bf-rootfs.mount: Deactivated successfully. 
Jun 25 16:34:12.480193 kubelet[2425]: E0625 16:34:12.479360 2425 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6zw6" podUID="8da5006e-233c-4549-81b3-ec063a911736" Jun 25 16:34:14.480053 kubelet[2425]: E0625 16:34:14.480006 2425 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6zw6" podUID="8da5006e-233c-4549-81b3-ec063a911736" Jun 25 16:34:14.585470 containerd[1344]: time="2024-06-25T16:34:14.585435698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:14.585964 containerd[1344]: time="2024-06-25T16:34:14.585932538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 16:34:14.586388 containerd[1344]: time="2024-06-25T16:34:14.586355773Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:14.587508 containerd[1344]: time="2024-06-25T16:34:14.587490277Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:14.588617 containerd[1344]: time="2024-06-25T16:34:14.588599136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:14.589260 containerd[1344]: time="2024-06-25T16:34:14.589238702Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 3.042306328s" Jun 25 16:34:14.589307 containerd[1344]: time="2024-06-25T16:34:14.589260343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 16:34:14.591279 containerd[1344]: time="2024-06-25T16:34:14.591262528Z" level=info msg="CreateContainer within sandbox \"48a87ec88553315310592643aab9d3339fc8579f21fa2db7ff9b8f6bb47bc723\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 16:34:14.605553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount687200249.mount: Deactivated successfully. 
Jun 25 16:34:14.607691 containerd[1344]: time="2024-06-25T16:34:14.607673439Z" level=info msg="CreateContainer within sandbox \"48a87ec88553315310592643aab9d3339fc8579f21fa2db7ff9b8f6bb47bc723\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dcb566da556c30ebe410cc81901112fa9e169d8612be33db4e9863ea46924712\"" Jun 25 16:34:14.608072 containerd[1344]: time="2024-06-25T16:34:14.608003905Z" level=info msg="StartContainer for \"dcb566da556c30ebe410cc81901112fa9e169d8612be33db4e9863ea46924712\"" Jun 25 16:34:14.647878 systemd[1]: Started cri-containerd-dcb566da556c30ebe410cc81901112fa9e169d8612be33db4e9863ea46924712.scope - libcontainer container dcb566da556c30ebe410cc81901112fa9e169d8612be33db4e9863ea46924712. Jun 25 16:34:14.650628 systemd[1]: run-containerd-runc-k8s.io-dcb566da556c30ebe410cc81901112fa9e169d8612be33db4e9863ea46924712-runc.yslL54.mount: Deactivated successfully. Jun 25 16:34:14.668000 audit: BPF prog-id=126 op=LOAD Jun 25 16:34:14.670077 kernel: kauditd_printk_skb: 56 callbacks suppressed Jun 25 16:34:14.670109 kernel: audit: type=1334 audit(1719333254.668:485): prog-id=126 op=LOAD Jun 25 16:34:14.670124 kernel: audit: type=1300 audit(1719333254.668:485): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2874 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:14.668000 audit[3106]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2874 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:14.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623536366461353536633330656265343130636338313930313131 Jun 25 16:34:14.674039 kernel: audit: type=1327 audit(1719333254.668:485): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623536366461353536633330656265343130636338313930313131 Jun 25 16:34:14.676896 kernel: audit: type=1334 audit(1719333254.668:486): prog-id=127 op=LOAD Jun 25 16:34:14.676917 kernel: audit: type=1300 audit(1719333254.668:486): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2874 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:14.668000 audit: BPF prog-id=127 op=LOAD Jun 25 16:34:14.668000 audit[3106]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2874 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:14.668000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623536366461353536633330656265343130636338313930313131 Jun 25 16:34:14.679471 kernel: audit: type=1327 audit(1719333254.668:486): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623536366461353536633330656265343130636338313930313131 Jun 25 16:34:14.668000 audit: BPF prog-id=127 op=UNLOAD Jun 25 16:34:14.680131 kernel: audit: type=1334 audit(1719333254.668:487): prog-id=127 op=UNLOAD Jun 25 16:34:14.680159 kernel: audit: type=1334 audit(1719333254.668:488): prog-id=126 op=UNLOAD Jun 25 16:34:14.668000 audit: BPF prog-id=126 op=UNLOAD Jun 25 16:34:14.668000 audit: BPF prog-id=128 op=LOAD Jun 25 16:34:14.681142 kernel: audit: type=1334 audit(1719333254.668:489): prog-id=128 op=LOAD Jun 25 16:34:14.681171 kernel: audit: type=1300 audit(1719333254.668:489): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2874 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:14.668000 audit[3106]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2874 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:14.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623536366461353536633330656265343130636338313930313131 Jun 25 16:34:14.697466 containerd[1344]: time="2024-06-25T16:34:14.697443361Z" level=info msg="StartContainer for \"dcb566da556c30ebe410cc81901112fa9e169d8612be33db4e9863ea46924712\" returns successfully" Jun 25 16:34:16.220283 systemd[1]: cri-containerd-dcb566da556c30ebe410cc81901112fa9e169d8612be33db4e9863ea46924712.scope: Deactivated successfully. Jun 25 16:34:16.225000 audit: BPF prog-id=128 op=UNLOAD Jun 25 16:34:16.241215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcb566da556c30ebe410cc81901112fa9e169d8612be33db4e9863ea46924712-rootfs.mount: Deactivated successfully. 
Jun 25 16:34:16.242242 containerd[1344]: time="2024-06-25T16:34:16.242208287Z" level=info msg="shim disconnected" id=dcb566da556c30ebe410cc81901112fa9e169d8612be33db4e9863ea46924712 namespace=k8s.io Jun 25 16:34:16.242396 containerd[1344]: time="2024-06-25T16:34:16.242240499Z" level=warning msg="cleaning up after shim disconnected" id=dcb566da556c30ebe410cc81901112fa9e169d8612be33db4e9863ea46924712 namespace=k8s.io Jun 25 16:34:16.242396 containerd[1344]: time="2024-06-25T16:34:16.242247469Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:34:16.282149 kubelet[2425]: I0625 16:34:16.282065 2425 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 16:34:16.312014 kubelet[2425]: I0625 16:34:16.311994 2425 topology_manager.go:215] "Topology Admit Handler" podUID="b19c0974-d630-40fb-96d8-9884cf02a803" podNamespace="kube-system" podName="coredns-5dd5756b68-g4hfm" Jun 25 16:34:16.322882 kubelet[2425]: I0625 16:34:16.322863 2425 topology_manager.go:215] "Topology Admit Handler" podUID="12e884cf-b0a0-408e-9cda-1efd4981caa7" podNamespace="calico-system" podName="calico-kube-controllers-678cc79c9f-99468" Jun 25 16:34:16.323053 kubelet[2425]: I0625 16:34:16.323045 2425 topology_manager.go:215] "Topology Admit Handler" podUID="d80999f6-e8ab-41e9-805a-94aad5dbb65d" podNamespace="kube-system" podName="coredns-5dd5756b68-glphj" Jun 25 16:34:16.351712 systemd[1]: Created slice kubepods-burstable-podb19c0974_d630_40fb_96d8_9884cf02a803.slice - libcontainer container kubepods-burstable-podb19c0974_d630_40fb_96d8_9884cf02a803.slice. Jun 25 16:34:16.353039 systemd[1]: Created slice kubepods-burstable-podd80999f6_e8ab_41e9_805a_94aad5dbb65d.slice - libcontainer container kubepods-burstable-podd80999f6_e8ab_41e9_805a_94aad5dbb65d.slice. Jun 25 16:34:16.354234 systemd[1]: Created slice kubepods-besteffort-pod12e884cf_b0a0_408e_9cda_1efd4981caa7.slice - libcontainer container kubepods-besteffort-pod12e884cf_b0a0_408e_9cda_1efd4981caa7.slice. 
Jun 25 16:34:16.420807 kubelet[2425]: I0625 16:34:16.420785 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d80999f6-e8ab-41e9-805a-94aad5dbb65d-config-volume\") pod \"coredns-5dd5756b68-glphj\" (UID: \"d80999f6-e8ab-41e9-805a-94aad5dbb65d\") " pod="kube-system/coredns-5dd5756b68-glphj" Jun 25 16:34:16.420894 kubelet[2425]: I0625 16:34:16.420834 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b19c0974-d630-40fb-96d8-9884cf02a803-config-volume\") pod \"coredns-5dd5756b68-g4hfm\" (UID: \"b19c0974-d630-40fb-96d8-9884cf02a803\") " pod="kube-system/coredns-5dd5756b68-g4hfm" Jun 25 16:34:16.420894 kubelet[2425]: I0625 16:34:16.420857 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12e884cf-b0a0-408e-9cda-1efd4981caa7-tigera-ca-bundle\") pod \"calico-kube-controllers-678cc79c9f-99468\" (UID: \"12e884cf-b0a0-408e-9cda-1efd4981caa7\") " pod="calico-system/calico-kube-controllers-678cc79c9f-99468" Jun 25 16:34:16.420894 kubelet[2425]: I0625 16:34:16.420876 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dblv8\" (UniqueName: \"kubernetes.io/projected/d80999f6-e8ab-41e9-805a-94aad5dbb65d-kube-api-access-dblv8\") pod \"coredns-5dd5756b68-glphj\" (UID: \"d80999f6-e8ab-41e9-805a-94aad5dbb65d\") " pod="kube-system/coredns-5dd5756b68-glphj" Jun 25 16:34:16.420976 kubelet[2425]: I0625 16:34:16.420907 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq2g5\" (UniqueName: \"kubernetes.io/projected/b19c0974-d630-40fb-96d8-9884cf02a803-kube-api-access-dq2g5\") pod \"coredns-5dd5756b68-g4hfm\" (UID: \"b19c0974-d630-40fb-96d8-9884cf02a803\") " pod="kube-system/coredns-5dd5756b68-g4hfm" Jun 25 16:34:16.420976 kubelet[2425]: I0625 16:34:16.420927 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d25zx\" (UniqueName: \"kubernetes.io/projected/12e884cf-b0a0-408e-9cda-1efd4981caa7-kube-api-access-d25zx\") pod \"calico-kube-controllers-678cc79c9f-99468\" (UID: \"12e884cf-b0a0-408e-9cda-1efd4981caa7\") " pod="calico-system/calico-kube-controllers-678cc79c9f-99468" Jun 25 16:34:16.483237 systemd[1]: Created slice kubepods-besteffort-pod8da5006e_233c_4549_81b3_ec063a911736.slice - libcontainer container kubepods-besteffort-pod8da5006e_233c_4549_81b3_ec063a911736.slice. 
Jun 25 16:34:16.485488 containerd[1344]: time="2024-06-25T16:34:16.485372359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6zw6,Uid:8da5006e-233c-4549-81b3-ec063a911736,Namespace:calico-system,Attempt:0,}" Jun 25 16:34:16.555964 containerd[1344]: time="2024-06-25T16:34:16.555760350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 16:34:16.619235 containerd[1344]: time="2024-06-25T16:34:16.619194787Z" level=error msg="Failed to destroy network for sandbox \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:34:16.619532 containerd[1344]: time="2024-06-25T16:34:16.619516599Z" level=error msg="encountered an error cleaning up failed sandbox \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:34:16.619608 containerd[1344]: time="2024-06-25T16:34:16.619592249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6zw6,Uid:8da5006e-233c-4549-81b3-ec063a911736,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:34:16.628742 kubelet[2425]: E0625 16:34:16.627755 2425 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:34:16.628742 kubelet[2425]: E0625 16:34:16.628588 2425 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z6zw6" Jun 25 16:34:16.628742 kubelet[2425]: E0625 16:34:16.628607 2425 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z6zw6" Jun 25 16:34:16.628886 kubelet[2425]: E0625 16:34:16.628659 2425 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z6zw6_calico-system(8da5006e-233c-4549-81b3-ec063a911736)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-z6zw6_calico-system(8da5006e-233c-4549-81b3-ec063a911736)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z6zw6" podUID="8da5006e-233c-4549-81b3-ec063a911736" Jun 25 16:34:16.661381 containerd[1344]: time="2024-06-25T16:34:16.661352216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-glphj,Uid:d80999f6-e8ab-41e9-805a-94aad5dbb65d,Namespace:kube-system,Attempt:0,}" Jun 25 16:34:16.661497 containerd[1344]: time="2024-06-25T16:34:16.661483689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-678cc79c9f-99468,Uid:12e884cf-b0a0-408e-9cda-1efd4981caa7,Namespace:calico-system,Attempt:0,}" Jun 25 16:34:16.661647 containerd[1344]: time="2024-06-25T16:34:16.661352221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-g4hfm,Uid:b19c0974-d630-40fb-96d8-9884cf02a803,Namespace:kube-system,Attempt:0,}" Jun 25 16:34:16.763143 containerd[1344]: time="2024-06-25T16:34:16.760436735Z" level=error msg="Failed to destroy network for sandbox \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:34:16.763143 containerd[1344]: time="2024-06-25T16:34:16.760810193Z" level=error msg="encountered an error cleaning up failed sandbox \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:34:16.763143 containerd[1344]: time="2024-06-25T16:34:16.760847223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-glphj,Uid:d80999f6-e8ab-41e9-805a-94aad5dbb65d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:34:16.763326 kubelet[2425]: E0625 16:34:16.762830 2425 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:34:16.763326 kubelet[2425]: E0625 16:34:16.762864 2425 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-glphj" 
Jun 25 16:34:16.763326 kubelet[2425]: E0625 16:34:16.762877 2425 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-glphj" Jun 25 16:34:16.763403 kubelet[2425]: E0625 16:34:16.762910 2425 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-glphj_kube-system(d80999f6-e8ab-41e9-805a-94aad5dbb65d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-glphj_kube-system(d80999f6-e8ab-41e9-805a-94aad5dbb65d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-glphj" podUID="d80999f6-e8ab-41e9-805a-94aad5dbb65d" Jun 25 16:34:16.779275 containerd[1344]: time="2024-06-25T16:34:16.779240546Z" level=error msg="Failed to destroy network for sandbox \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:34:16.779617 containerd[1344]: time="2024-06-25T16:34:16.779600368Z" level=error msg="encountered an error cleaning up failed sandbox \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:34:16.779694 containerd[1344]: time="2024-06-25T16:34:16.779679374Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-g4hfm,Uid:b19c0974-d630-40fb-96d8-9884cf02a803,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:34:16.780073 kubelet[2425]: E0625 16:34:16.779864 2425 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:34:16.780073 kubelet[2425]: E0625 16:34:16.779897 2425 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-5dd5756b68-g4hfm" Jun 25 16:34:16.780073 kubelet[2425]: E0625 16:34:16.779913 2425 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-g4hfm" Jun 25 16:34:16.780161 kubelet[2425]: E0625 16:34:16.779948 2425 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-g4hfm_kube-system(b19c0974-d630-40fb-96d8-9884cf02a803)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-g4hfm_kube-system(b19c0974-d630-40fb-96d8-9884cf02a803)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-g4hfm" podUID="b19c0974-d630-40fb-96d8-9884cf02a803" Jun 25 16:34:16.784688 containerd[1344]: time="2024-06-25T16:34:16.780896499Z" level=error msg="Failed to destroy network for sandbox \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:34:16.784688 containerd[1344]: time="2024-06-25T16:34:16.784329629Z" level=error msg="encountered an error cleaning up failed sandbox \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:34:16.784688 containerd[1344]: time="2024-06-25T16:34:16.784366953Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-678cc79c9f-99468,Uid:12e884cf-b0a0-408e-9cda-1efd4981caa7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:34:16.784842 kubelet[2425]: E0625 16:34:16.784484 2425 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:34:16.784842 kubelet[2425]: E0625 16:34:16.784521 2425 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-678cc79c9f-99468" Jun 25 16:34:16.784842 kubelet[2425]: E0625 16:34:16.784535 2425 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-678cc79c9f-99468" Jun 25 16:34:16.784932 kubelet[2425]: E0625 16:34:16.784562 2425 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-678cc79c9f-99468_calico-system(12e884cf-b0a0-408e-9cda-1efd4981caa7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-678cc79c9f-99468_calico-system(12e884cf-b0a0-408e-9cda-1efd4981caa7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-678cc79c9f-99468" podUID="12e884cf-b0a0-408e-9cda-1efd4981caa7" Jun 25 16:34:17.242984 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6-shm.mount: Deactivated successfully. Jun 25 16:34:17.556247 kubelet[2425]: I0625 16:34:17.556175 2425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Jun 25 16:34:17.558356 kubelet[2425]: I0625 16:34:17.558144 2425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Jun 25 16:34:17.580594 kubelet[2425]: I0625 16:34:17.580577 2425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Jun 25 16:34:17.581943 kubelet[2425]: I0625 16:34:17.581933 2425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Jun 25 16:34:17.592704 containerd[1344]: time="2024-06-25T16:34:17.592517327Z" level=info msg="StopPodSandbox for \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\"" Jun 25 16:34:17.592704 containerd[1344]: time="2024-06-25T16:34:17.592586540Z" level=info msg="StopPodSandbox for \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\"" Jun 25 16:34:17.596493 containerd[1344]: time="2024-06-25T16:34:17.595932126Z" level=info msg="Ensure that sandbox cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd in task-service has been cleanup successfully" Jun 25 16:34:17.596625 containerd[1344]: time="2024-06-25T16:34:17.596609222Z" level=info msg="StopPodSandbox for \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\"" Jun 25 16:34:17.596801 containerd[1344]: time="2024-06-25T16:34:17.596787013Z" level=info msg="Ensure that sandbox 976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a in task-service has been cleanup 
successfully" Jun 25 16:34:17.596988 containerd[1344]: time="2024-06-25T16:34:17.596974550Z" level=info msg="Ensure that sandbox 7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6 in task-service has been cleanup successfully" Jun 25 16:34:17.597131 containerd[1344]: time="2024-06-25T16:34:17.592536001Z" level=info msg="StopPodSandbox for \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\"" Jun 25 16:34:17.597278 containerd[1344]: time="2024-06-25T16:34:17.597265635Z" level=info msg="Ensure that sandbox 6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52 in task-service has been cleanup successfully" Jun 25 16:34:17.626843 containerd[1344]: time="2024-06-25T16:34:17.626801871Z" level=error msg="StopPodSandbox for \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\" failed" error="failed to destroy network for sandbox \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:34:17.627062 kubelet[2425]: E0625 16:34:17.627037 2425 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Jun 25 16:34:17.633108 kubelet[2425]: E0625 16:34:17.633094 2425 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a"} Jun 25 16:34:17.633150 kubelet[2425]: E0625 16:34:17.633122 2425 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d80999f6-e8ab-41e9-805a-94aad5dbb65d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:34:17.633197 kubelet[2425]: E0625 16:34:17.633151 2425 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d80999f6-e8ab-41e9-805a-94aad5dbb65d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-glphj" podUID="d80999f6-e8ab-41e9-805a-94aad5dbb65d" Jun 25 16:34:17.637449 containerd[1344]: time="2024-06-25T16:34:17.637420219Z" level=error msg="StopPodSandbox for \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\" failed" error="failed to destroy network for sandbox \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jun 25 16:34:17.637542 kubelet[2425]: E0625 16:34:17.637532 2425 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Jun 25 16:34:17.637574 kubelet[2425]: E0625 16:34:17.637551 2425 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd"} Jun 25 16:34:17.637574 kubelet[2425]: E0625 16:34:17.637572 2425 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"12e884cf-b0a0-408e-9cda-1efd4981caa7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:34:17.637630 kubelet[2425]: E0625 16:34:17.637588 2425 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"12e884cf-b0a0-408e-9cda-1efd4981caa7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-678cc79c9f-99468" podUID="12e884cf-b0a0-408e-9cda-1efd4981caa7" Jun 25 16:34:17.638058 containerd[1344]: time="2024-06-25T16:34:17.638017147Z" level=error msg="StopPodSandbox for \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\" failed" error="failed to destroy network for sandbox \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:34:17.638176 kubelet[2425]: E0625 16:34:17.638162 2425 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Jun 25 16:34:17.638207 kubelet[2425]: E0625 16:34:17.638178 2425 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6"} Jun 25 16:34:17.638207 kubelet[2425]: E0625 16:34:17.638201 2425 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8da5006e-233c-4549-81b3-ec063a911736\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed 
to destroy network for sandbox \\\"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:34:17.638261 kubelet[2425]: E0625 16:34:17.638215 2425 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8da5006e-233c-4549-81b3-ec063a911736\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z6zw6" podUID="8da5006e-233c-4549-81b3-ec063a911736" Jun 25 16:34:17.643429 containerd[1344]: time="2024-06-25T16:34:17.643409395Z" level=error msg="StopPodSandbox for \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\" failed" error="failed to destroy network for sandbox \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:34:17.643555 kubelet[2425]: E0625 16:34:17.643542 2425 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Jun 25 16:34:17.643588 kubelet[2425]: E0625 16:34:17.643560 2425 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52"} Jun 25 16:34:17.643588 kubelet[2425]: E0625 16:34:17.643584 2425 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b19c0974-d630-40fb-96d8-9884cf02a803\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:34:17.643643 kubelet[2425]: E0625 16:34:17.643598 2425 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b19c0974-d630-40fb-96d8-9884cf02a803\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-g4hfm" podUID="b19c0974-d630-40fb-96d8-9884cf02a803" Jun 25 16:34:20.338040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1408947194.mount: Deactivated successfully. 
Jun 25 16:34:20.484881 containerd[1344]: time="2024-06-25T16:34:20.473243881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:20.487341 containerd[1344]: time="2024-06-25T16:34:20.487111274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 16:34:20.500511 containerd[1344]: time="2024-06-25T16:34:20.500480757Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:20.501353 containerd[1344]: time="2024-06-25T16:34:20.501336296Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:20.502204 containerd[1344]: time="2024-06-25T16:34:20.502188589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:20.502683 containerd[1344]: time="2024-06-25T16:34:20.502664650Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 3.946877053s" Jun 25 16:34:20.502757 containerd[1344]: time="2024-06-25T16:34:20.502743944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 16:34:20.528415 containerd[1344]: time="2024-06-25T16:34:20.528393826Z" level=info msg="CreateContainer within sandbox \"48a87ec88553315310592643aab9d3339fc8579f21fa2db7ff9b8f6bb47bc723\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 16:34:20.543681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount754253517.mount: Deactivated successfully. Jun 25 16:34:20.550278 containerd[1344]: time="2024-06-25T16:34:20.550257255Z" level=info msg="CreateContainer within sandbox \"48a87ec88553315310592643aab9d3339fc8579f21fa2db7ff9b8f6bb47bc723\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b6f0aefe61be1ca5b0ac2a5056a687fd5aaaf4e3201729e492e91e98ea2a0e03\"" Jun 25 16:34:20.555550 containerd[1344]: time="2024-06-25T16:34:20.555536654Z" level=info msg="StartContainer for \"b6f0aefe61be1ca5b0ac2a5056a687fd5aaaf4e3201729e492e91e98ea2a0e03\"" Jun 25 16:34:20.686811 systemd[1]: Started cri-containerd-b6f0aefe61be1ca5b0ac2a5056a687fd5aaaf4e3201729e492e91e98ea2a0e03.scope - libcontainer container b6f0aefe61be1ca5b0ac2a5056a687fd5aaaf4e3201729e492e91e98ea2a0e03. 
Jun 25 16:34:20.695000 audit: BPF prog-id=129 op=LOAD Jun 25 16:34:20.696932 kernel: kauditd_printk_skb: 2 callbacks suppressed Jun 25 16:34:20.699696 kernel: audit: type=1334 audit(1719333260.695:491): prog-id=129 op=LOAD Jun 25 16:34:20.699720 kernel: audit: type=1300 audit(1719333260.695:491): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2874 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:20.699770 kernel: audit: type=1327 audit(1719333260.695:491): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236663061656665363162653163613562306163326135303536613638 Jun 25 16:34:20.695000 audit[3377]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2874 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:20.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236663061656665363162653163613562306163326135303536613638 Jun 25 16:34:20.696000 audit: BPF prog-id=130 op=LOAD Jun 25 16:34:20.701990 kernel: audit: type=1334 audit(1719333260.696:492): prog-id=130 op=LOAD Jun 25 16:34:20.702016 kernel: audit: type=1300 audit(1719333260.696:492): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2874 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:20.696000 audit[3377]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2874 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:20.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236663061656665363162653163613562306163326135303536613638 Jun 25 16:34:20.706321 kernel: audit: type=1327 audit(1719333260.696:492): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236663061656665363162653163613562306163326135303536613638 Jun 25 16:34:20.706351 kernel: audit: type=1334 audit(1719333260.696:493): prog-id=130 op=UNLOAD Jun 25 16:34:20.696000 audit: BPF prog-id=130 op=UNLOAD Jun 25 16:34:20.706811 kernel: audit: type=1334 audit(1719333260.696:494): prog-id=129 op=UNLOAD Jun 25 16:34:20.696000 audit: BPF prog-id=129 op=UNLOAD Jun 25 16:34:20.696000 audit: BPF prog-id=131 op=LOAD Jun 25 16:34:20.708881 kernel: audit: type=1334 audit(1719333260.696:495): prog-id=131 op=LOAD Jun 25 16:34:20.708911 kernel: audit: type=1300 audit(1719333260.696:495): 
arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2874 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:20.696000 audit[3377]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2874 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:20.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236663061656665363162653163613562306163326135303536613638 Jun 25 16:34:20.711339 containerd[1344]: time="2024-06-25T16:34:20.711319341Z" level=info msg="StartContainer for \"b6f0aefe61be1ca5b0ac2a5056a687fd5aaaf4e3201729e492e91e98ea2a0e03\" returns successfully" Jun 25 16:34:20.766068 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 16:34:20.766136 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 25 16:34:21.649321 kubelet[2425]: I0625 16:34:21.649302 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-p67fj" podStartSLOduration=2.048623631 podCreationTimestamp="2024-06-25 16:34:07 +0000 UTC" firstStartedPulling="2024-06-25 16:34:07.925981655 +0000 UTC m=+19.594409691" lastFinishedPulling="2024-06-25 16:34:20.502865387 +0000 UTC m=+32.171293425" observedRunningTime="2024-06-25 16:34:21.621538929 +0000 UTC m=+33.289966973" watchObservedRunningTime="2024-06-25 16:34:21.625507365 +0000 UTC m=+33.293935407" Jun 25 16:34:21.980000 audit[3490]: AVC avc: denied { write } for pid=3490 comm="tee" name="fd" dev="proc" ino=30592 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:34:21.980000 audit[3490]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe772fca2f a2=241 a3=1b6 items=1 ppid=3453 pid=3490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:21.980000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 16:34:21.980000 audit: PATH item=0 name="/dev/fd/63" inode=30587 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:34:21.980000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:34:21.986000 audit[3479]: AVC avc: denied { write } for pid=3479 comm="tee" name="fd" dev="proc" ino=30853 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:34:21.986000 audit[3479]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcd54a8a2e a2=241 a3=1b6 items=1 ppid=3443 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:21.986000 audit: CWD 
cwd="/etc/service/enabled/confd/log" Jun 25 16:34:21.986000 audit: PATH item=0 name="/dev/fd/63" inode=30842 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:34:21.986000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:34:22.008000 audit[3509]: AVC avc: denied { write } for pid=3509 comm="tee" name="fd" dev="proc" ino=30878 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:34:22.011000 audit[3511]: AVC avc: denied { write } for pid=3511 comm="tee" name="fd" dev="proc" ino=30881 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:34:22.008000 audit[3509]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffa75f3a30 a2=241 a3=1b6 items=1 ppid=3442 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:22.008000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 16:34:22.008000 audit: PATH item=0 name="/dev/fd/63" inode=30870 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:34:22.008000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:34:22.011000 audit[3511]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcbd87ea1f a2=241 a3=1b6 items=1 ppid=3448 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:22.011000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 16:34:22.011000 audit: PATH item=0 name="/dev/fd/63" inode=30873 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:34:22.011000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:34:22.013000 audit[3513]: AVC avc: denied { write } for pid=3513 comm="tee" name="fd" dev="proc" ino=30886 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:34:22.016000 audit[3504]: AVC avc: denied { write } for pid=3504 comm="tee" name="fd" dev="proc" ino=30889 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:34:22.013000 audit[3513]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdb853fa1e a2=241 a3=1b6 items=1 ppid=3454 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:22.013000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 16:34:22.013000 audit: PATH item=0 name="/dev/fd/63" inode=30599 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 
obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:34:22.013000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:34:22.016000 audit[3504]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc633cea2e a2=241 a3=1b6 items=1 ppid=3446 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:22.016000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 16:34:22.016000 audit: PATH item=0 name="/dev/fd/63" inode=30596 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:34:22.016000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:34:22.021000 audit[3507]: AVC avc: denied { write } for pid=3507 comm="tee" name="fd" dev="proc" ino=30604 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:34:22.021000 audit[3507]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd5eec0a2e a2=241 a3=1b6 items=1 ppid=3450 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:22.021000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 16:34:22.021000 audit: PATH item=0 name="/dev/fd/63" inode=30867 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:34:22.021000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:34:22.628814 kubelet[2425]: I0625 16:34:22.628795 2425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:34:26.023440 kubelet[2425]: I0625 16:34:26.023077 2425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:34:26.095000 audit[3589]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=3589 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:26.097395 kernel: kauditd_printk_skb: 36 callbacks suppressed Jun 25 16:34:26.097435 kernel: audit: type=1325 audit(1719333266.095:503): table=filter:95 family=2 entries=15 op=nft_register_rule pid=3589 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:26.097454 kernel: audit: type=1300 audit(1719333266.095:503): arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffe32b040a0 a2=0 a3=7ffe32b0408c items=0 ppid=2604 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:26.095000 audit[3589]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffe32b040a0 a2=0 a3=7ffe32b0408c items=0 ppid=2604 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:26.095000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:26.100572 kernel: audit: type=1327 audit(1719333266.095:503): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:26.095000 audit[3589]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=3589 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:26.095000 audit[3589]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe32b040a0 a2=0 a3=7ffe32b0408c items=0 ppid=2604 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:26.105648 kernel: audit: type=1325 audit(1719333266.095:504): table=nat:96 family=2 entries=19 op=nft_register_chain pid=3589 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:26.105681 kernel: audit: type=1300 audit(1719333266.095:504): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe32b040a0 a2=0 a3=7ffe32b0408c items=0 ppid=2604 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:26.106485 kernel: audit: type=1327 audit(1719333266.095:504): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:26.095000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:26.494867 systemd-networkd[1156]: vxlan.calico: Link UP Jun 25 16:34:26.494872 systemd-networkd[1156]: vxlan.calico: Gained carrier Jun 25 16:34:26.509905 kernel: audit: type=1334 audit(1719333266.503:505): prog-id=132 op=LOAD Jun 25 16:34:26.509964 kernel: audit: type=1300 audit(1719333266.503:505): arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff1afa5b90 a2=70 a3=7f3a8e233000 items=0 ppid=3590 pid=3655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:26.509984 kernel: audit: type=1327 audit(1719333266.503:505): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:34:26.510085 kernel: audit: type=1334 audit(1719333266.508:506): prog-id=132 op=UNLOAD Jun 25 16:34:26.503000 audit: BPF prog-id=132 op=LOAD Jun 25 16:34:26.503000 audit[3655]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff1afa5b90 a2=70 a3=7f3a8e233000 items=0 ppid=3590 pid=3655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:26.503000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:34:26.508000 audit: BPF prog-id=132 op=UNLOAD Jun 25 16:34:26.508000 audit: BPF prog-id=133 op=LOAD Jun 25 16:34:26.508000 audit[3655]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff1afa5b90 a2=70 a3=6f items=0 ppid=3590 pid=3655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:26.508000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:34:26.508000 audit: BPF prog-id=133 op=UNLOAD Jun 25 16:34:26.508000 audit: BPF prog-id=134 op=LOAD Jun 25 16:34:26.508000 audit[3655]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff1afa5b20 a2=70 a3=7fff1afa5b90 items=0 ppid=3590 pid=3655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:26.508000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:34:26.508000 audit: BPF prog-id=134 op=UNLOAD Jun 25 16:34:26.509000 audit: BPF prog-id=135 op=LOAD Jun 25 16:34:26.509000 audit[3655]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff1afa5b50 a2=70 a3=0 items=0 ppid=3590 pid=3655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:26.509000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:34:26.543000 audit: BPF prog-id=135 op=UNLOAD Jun 25 16:34:26.642000 audit[3715]: NETFILTER_CFG table=raw:97 family=2 entries=19 op=nft_register_chain pid=3715 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:34:26.642000 audit[3715]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7fff50ad33f0 a2=0 a3=7fff50ad33dc items=0 ppid=3590 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:26.642000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:34:26.659000 audit[3716]: NETFILTER_CFG table=mangle:98 family=2 entries=16 op=nft_register_chain pid=3716 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:34:26.659000 audit[3716]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd5eb88670 a2=0 a3=7ffd5eb8865c items=0 ppid=3590 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:26.659000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:34:26.665000 audit[3719]: NETFILTER_CFG table=nat:99 family=2 entries=15 op=nft_register_chain pid=3719 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:34:26.665000 audit[3719]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc98ac8430 a2=0 a3=7ffc98ac841c items=0 ppid=3590 pid=3719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:26.665000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:34:26.665000 audit[3718]: NETFILTER_CFG table=filter:100 family=2 entries=39 op=nft_register_chain pid=3718 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:34:26.665000 audit[3718]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffcb1e414a0 a2=0 a3=7ffcb1e4148c items=0 ppid=3590 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:26.665000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:34:27.783951 systemd-networkd[1156]: vxlan.calico: Gained IPv6LL Jun 25 16:34:28.382901 kubelet[2425]: I0625 16:34:28.382828 2425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:34:30.480610 containerd[1344]: time="2024-06-25T16:34:30.480423139Z" level=info msg="StopPodSandbox for \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\"" Jun 25 16:34:30.480610 containerd[1344]: time="2024-06-25T16:34:30.480566394Z" level=info msg="StopPodSandbox for \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\"" Jun 25 16:34:30.720478 containerd[1344]: 2024-06-25 16:34:30.516 [INFO][3800] k8s.go 608: Cleaning up netns ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Jun 25 16:34:30.720478 containerd[1344]: 2024-06-25 16:34:30.517 [INFO][3800] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" iface="eth0" netns="/var/run/netns/cni-d864d343-2385-6a33-1cf3-a573854f7c7d" Jun 25 16:34:30.720478 containerd[1344]: 2024-06-25 16:34:30.517 [INFO][3800] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" iface="eth0" netns="/var/run/netns/cni-d864d343-2385-6a33-1cf3-a573854f7c7d" Jun 25 16:34:30.720478 containerd[1344]: 2024-06-25 16:34:30.517 [INFO][3800] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" iface="eth0" netns="/var/run/netns/cni-d864d343-2385-6a33-1cf3-a573854f7c7d" Jun 25 16:34:30.720478 containerd[1344]: 2024-06-25 16:34:30.517 [INFO][3800] k8s.go 615: Releasing IP address(es) ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Jun 25 16:34:30.720478 containerd[1344]: 2024-06-25 16:34:30.517 [INFO][3800] utils.go 188: Calico CNI releasing IP address ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Jun 25 16:34:30.720478 containerd[1344]: 2024-06-25 16:34:30.706 [INFO][3812] ipam_plugin.go 411: Releasing address using handleID ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" HandleID="k8s-pod-network.cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Workload="localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0" Jun 25 16:34:30.720478 containerd[1344]: 2024-06-25 16:34:30.708 [INFO][3812] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:30.720478 containerd[1344]: 2024-06-25 16:34:30.709 [INFO][3812] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:30.720478 containerd[1344]: 2024-06-25 16:34:30.716 [WARNING][3812] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" HandleID="k8s-pod-network.cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Workload="localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0" Jun 25 16:34:30.720478 containerd[1344]: 2024-06-25 16:34:30.716 [INFO][3812] ipam_plugin.go 439: Releasing address using workloadID ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" HandleID="k8s-pod-network.cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Workload="localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0" Jun 25 16:34:30.720478 containerd[1344]: 2024-06-25 16:34:30.718 [INFO][3812] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:30.720478 containerd[1344]: 2024-06-25 16:34:30.718 [INFO][3800] k8s.go 621: Teardown processing complete. ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Jun 25 16:34:30.721097 containerd[1344]: time="2024-06-25T16:34:30.721063839Z" level=info msg="TearDown network for sandbox \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\" successfully" Jun 25 16:34:30.721164 containerd[1344]: time="2024-06-25T16:34:30.721150990Z" level=info msg="StopPodSandbox for \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\" returns successfully" Jun 25 16:34:30.722928 systemd[1]: run-netns-cni\x2dd864d343\x2d2385\x2d6a33\x2d1cf3\x2da573854f7c7d.mount: Deactivated successfully. Jun 25 16:34:30.724395 containerd[1344]: time="2024-06-25T16:34:30.723531653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-678cc79c9f-99468,Uid:12e884cf-b0a0-408e-9cda-1efd4981caa7,Namespace:calico-system,Attempt:1,}" Jun 25 16:34:30.729920 containerd[1344]: 2024-06-25 16:34:30.523 [INFO][3801] k8s.go 608: Cleaning up netns ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Jun 25 16:34:30.729920 containerd[1344]: 2024-06-25 16:34:30.524 [INFO][3801] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" iface="eth0" netns="/var/run/netns/cni-5256ad17-298a-adb2-814f-ea788dba52ec" Jun 25 16:34:30.729920 containerd[1344]: 2024-06-25 16:34:30.524 [INFO][3801] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" iface="eth0" netns="/var/run/netns/cni-5256ad17-298a-adb2-814f-ea788dba52ec" Jun 25 16:34:30.729920 containerd[1344]: 2024-06-25 16:34:30.524 [INFO][3801] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" iface="eth0" netns="/var/run/netns/cni-5256ad17-298a-adb2-814f-ea788dba52ec" Jun 25 16:34:30.729920 containerd[1344]: 2024-06-25 16:34:30.524 [INFO][3801] k8s.go 615: Releasing IP address(es) ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Jun 25 16:34:30.729920 containerd[1344]: 2024-06-25 16:34:30.524 [INFO][3801] utils.go 188: Calico CNI releasing IP address ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Jun 25 16:34:30.729920 containerd[1344]: 2024-06-25 16:34:30.706 [INFO][3813] ipam_plugin.go 411: Releasing address using handleID ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" HandleID="k8s-pod-network.6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Workload="localhost-k8s-coredns--5dd5756b68--g4hfm-eth0" Jun 25 16:34:30.729920 containerd[1344]: 2024-06-25 16:34:30.708 [INFO][3813] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:30.729920 containerd[1344]: 2024-06-25 16:34:30.718 [INFO][3813] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:30.729920 containerd[1344]: 2024-06-25 16:34:30.725 [WARNING][3813] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" HandleID="k8s-pod-network.6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Workload="localhost-k8s-coredns--5dd5756b68--g4hfm-eth0" Jun 25 16:34:30.729920 containerd[1344]: 2024-06-25 16:34:30.725 [INFO][3813] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" HandleID="k8s-pod-network.6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Workload="localhost-k8s-coredns--5dd5756b68--g4hfm-eth0" Jun 25 16:34:30.729920 containerd[1344]: 2024-06-25 16:34:30.726 [INFO][3813] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:30.729920 containerd[1344]: 2024-06-25 16:34:30.728 [INFO][3801] k8s.go 621: Teardown processing complete. ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Jun 25 16:34:30.731736 systemd[1]: run-netns-cni\x2d5256ad17\x2d298a\x2dadb2\x2d814f\x2dea788dba52ec.mount: Deactivated successfully. 
Jun 25 16:34:30.733486 containerd[1344]: time="2024-06-25T16:34:30.733461533Z" level=info msg="TearDown network for sandbox \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\" successfully" Jun 25 16:34:30.733540 containerd[1344]: time="2024-06-25T16:34:30.733529303Z" level=info msg="StopPodSandbox for \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\" returns successfully" Jun 25 16:34:30.733963 containerd[1344]: time="2024-06-25T16:34:30.733944225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-g4hfm,Uid:b19c0974-d630-40fb-96d8-9884cf02a803,Namespace:kube-system,Attempt:1,}" Jun 25 16:34:30.842833 systemd-networkd[1156]: cali2e77d959bf6: Link UP Jun 25 16:34:30.845276 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:34:30.845316 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2e77d959bf6: link becomes ready Jun 25 16:34:30.845392 systemd-networkd[1156]: cali2e77d959bf6: Gained carrier Jun 25 16:34:30.858876 containerd[1344]: 2024-06-25 16:34:30.789 [INFO][3824] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0 calico-kube-controllers-678cc79c9f- calico-system 12e884cf-b0a0-408e-9cda-1efd4981caa7 664 0 2024-06-25 16:34:07 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:678cc79c9f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-678cc79c9f-99468 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2e77d959bf6 [] []}} ContainerID="ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc" Namespace="calico-system" Pod="calico-kube-controllers-678cc79c9f-99468" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678cc79c9f--99468-" Jun 25 16:34:30.858876 containerd[1344]: 2024-06-25 16:34:30.789 [INFO][3824] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc" Namespace="calico-system" Pod="calico-kube-controllers-678cc79c9f-99468" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0" Jun 25 16:34:30.858876 containerd[1344]: 2024-06-25 16:34:30.815 [INFO][3847] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc" HandleID="k8s-pod-network.ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc" Workload="localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0" Jun 25 16:34:30.858876 containerd[1344]: 2024-06-25 16:34:30.821 [INFO][3847] ipam_plugin.go 264: Auto assigning IP ContainerID="ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc" HandleID="k8s-pod-network.ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc" Workload="localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050700), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-678cc79c9f-99468", "timestamp":"2024-06-25 16:34:30.815413366 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:34:30.858876 containerd[1344]: 2024-06-25 16:34:30.821 [INFO][3847] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:30.858876 containerd[1344]: 2024-06-25 16:34:30.821 [INFO][3847] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:30.858876 containerd[1344]: 2024-06-25 16:34:30.821 [INFO][3847] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:34:30.858876 containerd[1344]: 2024-06-25 16:34:30.822 [INFO][3847] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc" host="localhost" Jun 25 16:34:30.858876 containerd[1344]: 2024-06-25 16:34:30.828 [INFO][3847] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:34:30.858876 containerd[1344]: 2024-06-25 16:34:30.831 [INFO][3847] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:34:30.858876 containerd[1344]: 2024-06-25 16:34:30.832 [INFO][3847] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:34:30.858876 containerd[1344]: 2024-06-25 16:34:30.833 [INFO][3847] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:34:30.858876 containerd[1344]: 2024-06-25 16:34:30.833 [INFO][3847] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc" host="localhost" Jun 25 16:34:30.858876 containerd[1344]: 2024-06-25 16:34:30.834 [INFO][3847] ipam.go 1685: Creating new handle: k8s-pod-network.ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc Jun 25 16:34:30.858876 containerd[1344]: 2024-06-25 16:34:30.836 [INFO][3847] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc" host="localhost" Jun 25 16:34:30.858876 containerd[1344]: 2024-06-25 16:34:30.838 [INFO][3847] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc" host="localhost" Jun 25 16:34:30.858876 containerd[1344]: 2024-06-25 16:34:30.838 [INFO][3847] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc" host="localhost" Jun 25 16:34:30.858876 containerd[1344]: 2024-06-25 16:34:30.838 [INFO][3847] ipam_plugin.go 373: Released host-wide IPAM lock. 
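In the IPAM messages above, Calico confirms the host's affinity for the block 192.168.88.128/26 and claims 192.168.88.129 from it for the calico-kube-controllers pod; the later assignments in this log (.130 for coredns-5dd5756b68-g4hfm and .131 for coredns-5dd5756b68-glphj) come out of the same block. A quick illustrative check with Python's ipaddress module:

    # Sketch: the /26 block spans 64 addresses, .128 through .191,
    # and the pod IPs assigned in this log all fall inside it.
    import ipaddress

    block = ipaddress.ip_network('192.168.88.128/26')
    print(block.num_addresses, block[0], block[-1])   # 64 192.168.88.128 192.168.88.191
    for ip in ('192.168.88.129', '192.168.88.130', '192.168.88.131'):
        print(ip, ipaddress.ip_address(ip) in block)  # each prints True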
Jun 25 16:34:30.858876 containerd[1344]: 2024-06-25 16:34:30.838 [INFO][3847] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc" HandleID="k8s-pod-network.ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc" Workload="localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0" Jun 25 16:34:30.861281 containerd[1344]: 2024-06-25 16:34:30.839 [INFO][3824] k8s.go 386: Populated endpoint ContainerID="ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc" Namespace="calico-system" Pod="calico-kube-controllers-678cc79c9f-99468" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0", GenerateName:"calico-kube-controllers-678cc79c9f-", Namespace:"calico-system", SelfLink:"", UID:"12e884cf-b0a0-408e-9cda-1efd4981caa7", ResourceVersion:"664", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"678cc79c9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-678cc79c9f-99468", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e77d959bf6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:30.861281 containerd[1344]: 2024-06-25 16:34:30.839 [INFO][3824] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc" Namespace="calico-system" Pod="calico-kube-controllers-678cc79c9f-99468" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0" Jun 25 16:34:30.861281 containerd[1344]: 2024-06-25 16:34:30.839 [INFO][3824] dataplane_linux.go 68: Setting the host side veth name to cali2e77d959bf6 ContainerID="ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc" Namespace="calico-system" Pod="calico-kube-controllers-678cc79c9f-99468" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0" Jun 25 16:34:30.861281 containerd[1344]: 2024-06-25 16:34:30.845 [INFO][3824] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc" Namespace="calico-system" Pod="calico-kube-controllers-678cc79c9f-99468" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0" Jun 25 16:34:30.861281 containerd[1344]: 2024-06-25 16:34:30.849 [INFO][3824] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc" Namespace="calico-system" Pod="calico-kube-controllers-678cc79c9f-99468" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0", GenerateName:"calico-kube-controllers-678cc79c9f-", Namespace:"calico-system", SelfLink:"", UID:"12e884cf-b0a0-408e-9cda-1efd4981caa7", ResourceVersion:"664", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"678cc79c9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc", Pod:"calico-kube-controllers-678cc79c9f-99468", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e77d959bf6", MAC:"ce:46:7d:be:54:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:30.861281 containerd[1344]: 2024-06-25 16:34:30.856 [INFO][3824] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc" Namespace="calico-system" Pod="calico-kube-controllers-678cc79c9f-99468" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0" Jun 25 16:34:30.873847 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali02578a52c66: link becomes ready Jun 25 16:34:30.874173 systemd-networkd[1156]: cali02578a52c66: Link UP Jun 25 16:34:30.874272 systemd-networkd[1156]: cali02578a52c66: Gained carrier Jun 25 16:34:30.879665 containerd[1344]: 2024-06-25 16:34:30.806 [INFO][3834] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--g4hfm-eth0 coredns-5dd5756b68- kube-system b19c0974-d630-40fb-96d8-9884cf02a803 665 0 2024-06-25 16:34:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-g4hfm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali02578a52c66 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483" Namespace="kube-system" Pod="coredns-5dd5756b68-g4hfm" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--g4hfm-" Jun 25 16:34:30.879665 containerd[1344]: 2024-06-25 16:34:30.807 [INFO][3834] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483" Namespace="kube-system" Pod="coredns-5dd5756b68-g4hfm" 
WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--g4hfm-eth0" Jun 25 16:34:30.879665 containerd[1344]: 2024-06-25 16:34:30.827 [INFO][3853] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483" HandleID="k8s-pod-network.26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483" Workload="localhost-k8s-coredns--5dd5756b68--g4hfm-eth0" Jun 25 16:34:30.879665 containerd[1344]: 2024-06-25 16:34:30.834 [INFO][3853] ipam_plugin.go 264: Auto assigning IP ContainerID="26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483" HandleID="k8s-pod-network.26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483" Workload="localhost-k8s-coredns--5dd5756b68--g4hfm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000267e40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-g4hfm", "timestamp":"2024-06-25 16:34:30.827414133 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:34:30.879665 containerd[1344]: 2024-06-25 16:34:30.834 [INFO][3853] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:30.879665 containerd[1344]: 2024-06-25 16:34:30.838 [INFO][3853] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:30.879665 containerd[1344]: 2024-06-25 16:34:30.838 [INFO][3853] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:34:30.879665 containerd[1344]: 2024-06-25 16:34:30.839 [INFO][3853] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483" host="localhost" Jun 25 16:34:30.879665 containerd[1344]: 2024-06-25 16:34:30.844 [INFO][3853] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:34:30.879665 containerd[1344]: 2024-06-25 16:34:30.853 [INFO][3853] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:34:30.879665 containerd[1344]: 2024-06-25 16:34:30.856 [INFO][3853] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:34:30.879665 containerd[1344]: 2024-06-25 16:34:30.858 [INFO][3853] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:34:30.879665 containerd[1344]: 2024-06-25 16:34:30.858 [INFO][3853] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483" host="localhost" Jun 25 16:34:30.879665 containerd[1344]: 2024-06-25 16:34:30.858 [INFO][3853] ipam.go 1685: Creating new handle: k8s-pod-network.26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483 Jun 25 16:34:30.879665 containerd[1344]: 2024-06-25 16:34:30.860 [INFO][3853] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483" host="localhost" Jun 25 16:34:30.879665 containerd[1344]: 2024-06-25 16:34:30.864 [INFO][3853] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483" host="localhost" Jun 25 16:34:30.879665 containerd[1344]: 
2024-06-25 16:34:30.864 [INFO][3853] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483" host="localhost" Jun 25 16:34:30.879665 containerd[1344]: 2024-06-25 16:34:30.864 [INFO][3853] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:30.879665 containerd[1344]: 2024-06-25 16:34:30.864 [INFO][3853] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483" HandleID="k8s-pod-network.26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483" Workload="localhost-k8s-coredns--5dd5756b68--g4hfm-eth0" Jun 25 16:34:30.880125 containerd[1344]: 2024-06-25 16:34:30.866 [INFO][3834] k8s.go 386: Populated endpoint ContainerID="26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483" Namespace="kube-system" Pod="coredns-5dd5756b68-g4hfm" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--g4hfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--g4hfm-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b19c0974-d630-40fb-96d8-9884cf02a803", ResourceVersion:"665", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-g4hfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02578a52c66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:30.880125 containerd[1344]: 2024-06-25 16:34:30.866 [INFO][3834] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483" Namespace="kube-system" Pod="coredns-5dd5756b68-g4hfm" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--g4hfm-eth0" Jun 25 16:34:30.880125 containerd[1344]: 2024-06-25 16:34:30.866 [INFO][3834] dataplane_linux.go 68: Setting the host side veth name to cali02578a52c66 ContainerID="26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483" Namespace="kube-system" Pod="coredns-5dd5756b68-g4hfm" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--g4hfm-eth0" Jun 25 16:34:30.880125 containerd[1344]: 2024-06-25 16:34:30.870 [INFO][3834] dataplane_linux.go 479: 
Disabling IPv4 forwarding ContainerID="26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483" Namespace="kube-system" Pod="coredns-5dd5756b68-g4hfm" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--g4hfm-eth0" Jun 25 16:34:30.880125 containerd[1344]: 2024-06-25 16:34:30.870 [INFO][3834] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483" Namespace="kube-system" Pod="coredns-5dd5756b68-g4hfm" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--g4hfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--g4hfm-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b19c0974-d630-40fb-96d8-9884cf02a803", ResourceVersion:"665", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483", Pod:"coredns-5dd5756b68-g4hfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02578a52c66", MAC:"6e:a2:23:04:ff:9e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:30.880125 containerd[1344]: 2024-06-25 16:34:30.877 [INFO][3834] k8s.go 500: Wrote updated endpoint to datastore ContainerID="26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483" Namespace="kube-system" Pod="coredns-5dd5756b68-g4hfm" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--g4hfm-eth0" Jun 25 16:34:30.881000 audit[3876]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=3876 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:34:30.881000 audit[3876]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffece2174a0 a2=0 a3=7ffece21748c items=0 ppid=3590 pid=3876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:30.881000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:34:30.896000 audit[3900]: NETFILTER_CFG table=filter:102 family=2 entries=38 op=nft_register_chain 
pid=3900 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:34:30.896000 audit[3900]: SYSCALL arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7fff688075f0 a2=0 a3=7fff688075dc items=0 ppid=3590 pid=3900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:30.896000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:34:30.902106 containerd[1344]: time="2024-06-25T16:34:30.901966074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:34:30.902106 containerd[1344]: time="2024-06-25T16:34:30.902005918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:30.902106 containerd[1344]: time="2024-06-25T16:34:30.902018342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:34:30.902106 containerd[1344]: time="2024-06-25T16:34:30.902026163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:30.904124 containerd[1344]: time="2024-06-25T16:34:30.904035442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:34:30.904124 containerd[1344]: time="2024-06-25T16:34:30.904066001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:30.904124 containerd[1344]: time="2024-06-25T16:34:30.904078098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:34:30.904124 containerd[1344]: time="2024-06-25T16:34:30.904086346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:30.913860 systemd[1]: Started cri-containerd-ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc.scope - libcontainer container ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc. Jun 25 16:34:30.920444 systemd[1]: Started cri-containerd-26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483.scope - libcontainer container 26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483. 
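The PROCTITLE field in the audit records throughout this section is the audited process's command line, hex-encoded with NUL bytes separating the arguments. Decoding the value attached to the NETFILTER_CFG events above recovers the iptables-nft-restore invocation named in the comm= field; a minimal sketch:

    # Sketch: decode an audit PROCTITLE value; arguments are NUL-separated.
    hex_title = ('69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368'
                 '002D2D766572626F7365002D2D77616974003130'
                 '002D2D776169742D696E74657276616C003530303030')
    args = bytes.fromhex(hex_title).split(b'\x00')
    print([a.decode() for a in args])
    # ['iptables-nft-restore', '--noflush', '--verbose', '--wait', '10',
    #  '--wait-interval', '50000']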
Jun 25 16:34:30.922000 audit: BPF prog-id=136 op=LOAD Jun 25 16:34:30.922000 audit: BPF prog-id=137 op=LOAD Jun 25 16:34:30.922000 audit[3926]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3899 pid=3926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:30.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363623032333863373666356233653964633036623938393433623239 Jun 25 16:34:30.922000 audit: BPF prog-id=138 op=LOAD Jun 25 16:34:30.922000 audit[3926]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3899 pid=3926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:30.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363623032333863373666356233653964633036623938393433623239 Jun 25 16:34:30.922000 audit: BPF prog-id=138 op=UNLOAD Jun 25 16:34:30.922000 audit: BPF prog-id=137 op=UNLOAD Jun 25 16:34:30.922000 audit: BPF prog-id=139 op=LOAD Jun 25 16:34:30.922000 audit[3926]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3899 pid=3926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:30.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363623032333863373666356233653964633036623938393433623239 Jun 25 16:34:30.924576 systemd-resolved[1285]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:34:30.928000 audit: BPF prog-id=140 op=LOAD Jun 25 16:34:30.928000 audit: BPF prog-id=141 op=LOAD Jun 25 16:34:30.928000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3910 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:30.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236656466333362666339613434356461356164383161383165363430 Jun 25 16:34:30.928000 audit: BPF prog-id=142 op=LOAD Jun 25 16:34:30.928000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3910 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:30.928000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236656466333362666339613434356461356164383161383165363430 Jun 25 16:34:30.928000 audit: BPF prog-id=142 op=UNLOAD Jun 25 16:34:30.928000 audit: BPF prog-id=141 op=UNLOAD Jun 25 16:34:30.928000 audit: BPF prog-id=143 op=LOAD Jun 25 16:34:30.928000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3910 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:30.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236656466333362666339613434356461356164383161383165363430 Jun 25 16:34:30.930351 systemd-resolved[1285]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:34:30.954623 containerd[1344]: time="2024-06-25T16:34:30.954599920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-678cc79c9f-99468,Uid:12e884cf-b0a0-408e-9cda-1efd4981caa7,Namespace:calico-system,Attempt:1,} returns sandbox id \"ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc\"" Jun 25 16:34:30.955907 containerd[1344]: time="2024-06-25T16:34:30.955886735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-g4hfm,Uid:b19c0974-d630-40fb-96d8-9884cf02a803,Namespace:kube-system,Attempt:1,} returns sandbox id \"26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483\"" Jun 25 16:34:30.956948 containerd[1344]: time="2024-06-25T16:34:30.956878477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 16:34:30.969517 containerd[1344]: time="2024-06-25T16:34:30.969497368Z" level=info msg="CreateContainer within sandbox \"26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:34:30.999196 containerd[1344]: time="2024-06-25T16:34:30.999149548Z" level=info msg="CreateContainer within sandbox \"26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4cbae438a53dcb79dac8d385258e7b137137ecaa8e0e36d7d0e006703a95ade5\"" Jun 25 16:34:31.000667 containerd[1344]: time="2024-06-25T16:34:31.000652426Z" level=info msg="StartContainer for \"4cbae438a53dcb79dac8d385258e7b137137ecaa8e0e36d7d0e006703a95ade5\"" Jun 25 16:34:31.018818 systemd[1]: Started cri-containerd-4cbae438a53dcb79dac8d385258e7b137137ecaa8e0e36d7d0e006703a95ade5.scope - libcontainer container 4cbae438a53dcb79dac8d385258e7b137137ecaa8e0e36d7d0e006703a95ade5. 
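The SYSCALL audit records around the container starts are flat key=value lists. On x86_64 (arch=c000003e), syscall 46 is sendmsg, which is how the iptables-nft tools push their netlink ruleset, and syscall 321 is bpf, matching the BPF prog-id ... op=LOAD events emitted while runc sets up the containers. A rough sketch for pulling fields out of such a record; the record string below is an abridged excerpt and the syscall-name table covers only the two numbers seen in this log:

    # Sketch: pick key=value fields out of an audit SYSCALL record.
    import re

    record = ('arch=c000003e syscall=321 success=yes exit=16 ppid=3899 pid=3926 '
              'comm="runc" exe="/usr/bin/runc"')
    fields = dict(re.findall(r'(\w+)=("[^"]*"|\S+)', record))
    syscall_names = {46: 'sendmsg', 321: 'bpf'}   # x86_64 numbers only
    print(fields['comm'].strip('"'), syscall_names.get(int(fields['syscall']), '?'))
    # runc bpf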
Jun 25 16:34:31.024000 audit: BPF prog-id=144 op=LOAD Jun 25 16:34:31.024000 audit: BPF prog-id=145 op=LOAD Jun 25 16:34:31.024000 audit[3984]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3910 pid=3984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:31.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463626165343338613533646362373964616338643338353235386537 Jun 25 16:34:31.024000 audit: BPF prog-id=146 op=LOAD Jun 25 16:34:31.024000 audit[3984]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3910 pid=3984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:31.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463626165343338613533646362373964616338643338353235386537 Jun 25 16:34:31.024000 audit: BPF prog-id=146 op=UNLOAD Jun 25 16:34:31.024000 audit: BPF prog-id=145 op=UNLOAD Jun 25 16:34:31.024000 audit: BPF prog-id=147 op=LOAD Jun 25 16:34:31.024000 audit[3984]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3910 pid=3984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:31.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463626165343338613533646362373964616338643338353235386537 Jun 25 16:34:31.032406 containerd[1344]: time="2024-06-25T16:34:31.032380169Z" level=info msg="StartContainer for \"4cbae438a53dcb79dac8d385258e7b137137ecaa8e0e36d7d0e006703a95ade5\" returns successfully" Jun 25 16:34:31.479621 containerd[1344]: time="2024-06-25T16:34:31.479595478Z" level=info msg="StopPodSandbox for \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\"" Jun 25 16:34:31.536131 containerd[1344]: 2024-06-25 16:34:31.511 [INFO][4027] k8s.go 608: Cleaning up netns ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Jun 25 16:34:31.536131 containerd[1344]: 2024-06-25 16:34:31.511 [INFO][4027] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" iface="eth0" netns="/var/run/netns/cni-401fa345-5a6e-a532-094e-7a4837141fb6" Jun 25 16:34:31.536131 containerd[1344]: 2024-06-25 16:34:31.511 [INFO][4027] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" iface="eth0" netns="/var/run/netns/cni-401fa345-5a6e-a532-094e-7a4837141fb6" Jun 25 16:34:31.536131 containerd[1344]: 2024-06-25 16:34:31.511 [INFO][4027] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" iface="eth0" netns="/var/run/netns/cni-401fa345-5a6e-a532-094e-7a4837141fb6" Jun 25 16:34:31.536131 containerd[1344]: 2024-06-25 16:34:31.511 [INFO][4027] k8s.go 615: Releasing IP address(es) ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Jun 25 16:34:31.536131 containerd[1344]: 2024-06-25 16:34:31.511 [INFO][4027] utils.go 188: Calico CNI releasing IP address ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Jun 25 16:34:31.536131 containerd[1344]: 2024-06-25 16:34:31.530 [INFO][4033] ipam_plugin.go 411: Releasing address using handleID ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" HandleID="k8s-pod-network.976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Workload="localhost-k8s-coredns--5dd5756b68--glphj-eth0" Jun 25 16:34:31.536131 containerd[1344]: 2024-06-25 16:34:31.530 [INFO][4033] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:31.536131 containerd[1344]: 2024-06-25 16:34:31.530 [INFO][4033] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:31.536131 containerd[1344]: 2024-06-25 16:34:31.533 [WARNING][4033] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" HandleID="k8s-pod-network.976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Workload="localhost-k8s-coredns--5dd5756b68--glphj-eth0" Jun 25 16:34:31.536131 containerd[1344]: 2024-06-25 16:34:31.533 [INFO][4033] ipam_plugin.go 439: Releasing address using workloadID ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" HandleID="k8s-pod-network.976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Workload="localhost-k8s-coredns--5dd5756b68--glphj-eth0" Jun 25 16:34:31.536131 containerd[1344]: 2024-06-25 16:34:31.534 [INFO][4033] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:31.536131 containerd[1344]: 2024-06-25 16:34:31.535 [INFO][4027] k8s.go 621: Teardown processing complete. 
ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Jun 25 16:34:31.536572 containerd[1344]: time="2024-06-25T16:34:31.536255877Z" level=info msg="TearDown network for sandbox \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\" successfully" Jun 25 16:34:31.536572 containerd[1344]: time="2024-06-25T16:34:31.536275291Z" level=info msg="StopPodSandbox for \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\" returns successfully" Jun 25 16:34:31.536735 containerd[1344]: time="2024-06-25T16:34:31.536712434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-glphj,Uid:d80999f6-e8ab-41e9-805a-94aad5dbb65d,Namespace:kube-system,Attempt:1,}" Jun 25 16:34:31.606234 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali43f37aceffd: link becomes ready Jun 25 16:34:31.604634 systemd-networkd[1156]: cali43f37aceffd: Link UP Jun 25 16:34:31.605561 systemd-networkd[1156]: cali43f37aceffd: Gained carrier Jun 25 16:34:31.617325 containerd[1344]: 2024-06-25 16:34:31.563 [INFO][4039] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--glphj-eth0 coredns-5dd5756b68- kube-system d80999f6-e8ab-41e9-805a-94aad5dbb65d 679 0 2024-06-25 16:34:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-glphj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali43f37aceffd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5" Namespace="kube-system" Pod="coredns-5dd5756b68-glphj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--glphj-" Jun 25 16:34:31.617325 containerd[1344]: 2024-06-25 16:34:31.563 [INFO][4039] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5" Namespace="kube-system" Pod="coredns-5dd5756b68-glphj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--glphj-eth0" Jun 25 16:34:31.617325 containerd[1344]: 2024-06-25 16:34:31.580 [INFO][4051] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5" HandleID="k8s-pod-network.040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5" Workload="localhost-k8s-coredns--5dd5756b68--glphj-eth0" Jun 25 16:34:31.617325 containerd[1344]: 2024-06-25 16:34:31.584 [INFO][4051] ipam_plugin.go 264: Auto assigning IP ContainerID="040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5" HandleID="k8s-pod-network.040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5" Workload="localhost-k8s-coredns--5dd5756b68--glphj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efde0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-glphj", "timestamp":"2024-06-25 16:34:31.580099968 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:34:31.617325 containerd[1344]: 2024-06-25 16:34:31.584 [INFO][4051] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jun 25 16:34:31.617325 containerd[1344]: 2024-06-25 16:34:31.584 [INFO][4051] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:31.617325 containerd[1344]: 2024-06-25 16:34:31.585 [INFO][4051] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:34:31.617325 containerd[1344]: 2024-06-25 16:34:31.585 [INFO][4051] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5" host="localhost" Jun 25 16:34:31.617325 containerd[1344]: 2024-06-25 16:34:31.587 [INFO][4051] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:34:31.617325 containerd[1344]: 2024-06-25 16:34:31.594 [INFO][4051] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:34:31.617325 containerd[1344]: 2024-06-25 16:34:31.595 [INFO][4051] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:34:31.617325 containerd[1344]: 2024-06-25 16:34:31.596 [INFO][4051] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:34:31.617325 containerd[1344]: 2024-06-25 16:34:31.596 [INFO][4051] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5" host="localhost" Jun 25 16:34:31.617325 containerd[1344]: 2024-06-25 16:34:31.597 [INFO][4051] ipam.go 1685: Creating new handle: k8s-pod-network.040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5 Jun 25 16:34:31.617325 containerd[1344]: 2024-06-25 16:34:31.599 [INFO][4051] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5" host="localhost" Jun 25 16:34:31.617325 containerd[1344]: 2024-06-25 16:34:31.601 [INFO][4051] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5" host="localhost" Jun 25 16:34:31.617325 containerd[1344]: 2024-06-25 16:34:31.601 [INFO][4051] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5" host="localhost" Jun 25 16:34:31.617325 containerd[1344]: 2024-06-25 16:34:31.601 [INFO][4051] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:34:31.617325 containerd[1344]: 2024-06-25 16:34:31.601 [INFO][4051] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5" HandleID="k8s-pod-network.040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5" Workload="localhost-k8s-coredns--5dd5756b68--glphj-eth0" Jun 25 16:34:31.617780 containerd[1344]: 2024-06-25 16:34:31.602 [INFO][4039] k8s.go 386: Populated endpoint ContainerID="040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5" Namespace="kube-system" Pod="coredns-5dd5756b68-glphj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--glphj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--glphj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d80999f6-e8ab-41e9-805a-94aad5dbb65d", ResourceVersion:"679", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-glphj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43f37aceffd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:31.617780 containerd[1344]: 2024-06-25 16:34:31.602 [INFO][4039] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5" Namespace="kube-system" Pod="coredns-5dd5756b68-glphj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--glphj-eth0" Jun 25 16:34:31.617780 containerd[1344]: 2024-06-25 16:34:31.603 [INFO][4039] dataplane_linux.go 68: Setting the host side veth name to cali43f37aceffd ContainerID="040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5" Namespace="kube-system" Pod="coredns-5dd5756b68-glphj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--glphj-eth0" Jun 25 16:34:31.617780 containerd[1344]: 2024-06-25 16:34:31.606 [INFO][4039] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5" Namespace="kube-system" Pod="coredns-5dd5756b68-glphj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--glphj-eth0" Jun 25 16:34:31.617780 containerd[1344]: 2024-06-25 16:34:31.610 [INFO][4039] k8s.go 414: Added Mac, interface name, 
and active container ID to endpoint ContainerID="040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5" Namespace="kube-system" Pod="coredns-5dd5756b68-glphj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--glphj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--glphj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d80999f6-e8ab-41e9-805a-94aad5dbb65d", ResourceVersion:"679", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5", Pod:"coredns-5dd5756b68-glphj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43f37aceffd", MAC:"ce:15:dc:b9:5e:42", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:31.617780 containerd[1344]: 2024-06-25 16:34:31.616 [INFO][4039] k8s.go 500: Wrote updated endpoint to datastore ContainerID="040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5" Namespace="kube-system" Pod="coredns-5dd5756b68-glphj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--glphj-eth0" Jun 25 16:34:31.634993 containerd[1344]: time="2024-06-25T16:34:31.634943680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:34:31.635078 containerd[1344]: time="2024-06-25T16:34:31.634998988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:31.635078 containerd[1344]: time="2024-06-25T16:34:31.635016159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:34:31.635078 containerd[1344]: time="2024-06-25T16:34:31.635029370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:31.645833 systemd[1]: Started cri-containerd-040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5.scope - libcontainer container 040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5. 
Jun 25 16:34:31.649438 kernel: kauditd_printk_skb: 66 callbacks suppressed Jun 25 16:34:31.649490 kernel: audit: type=1325 audit(1719333271.646:537): table=filter:103 family=2 entries=34 op=nft_register_chain pid=4098 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:34:31.646000 audit[4098]: NETFILTER_CFG table=filter:103 family=2 entries=34 op=nft_register_chain pid=4098 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:34:31.651955 kernel: audit: type=1300 audit(1719333271.646:537): arch=c000003e syscall=46 success=yes exit=18220 a0=3 a1=7ffeda910ca0 a2=0 a3=7ffeda910c8c items=0 ppid=3590 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:31.652010 kernel: audit: type=1327 audit(1719333271.646:537): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:34:31.646000 audit[4098]: SYSCALL arch=c000003e syscall=46 success=yes exit=18220 a0=3 a1=7ffeda910ca0 a2=0 a3=7ffeda910c8c items=0 ppid=3590 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:31.646000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:34:31.658000 audit: BPF prog-id=148 op=LOAD Jun 25 16:34:31.659000 audit: BPF prog-id=149 op=LOAD Jun 25 16:34:31.660842 kernel: audit: type=1334 audit(1719333271.658:538): prog-id=148 op=LOAD Jun 25 16:34:31.660874 kernel: audit: type=1334 audit(1719333271.659:539): prog-id=149 op=LOAD Jun 25 16:34:31.660897 kernel: audit: type=1300 audit(1719333271.659:539): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=4078 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:31.662939 kernel: audit: type=1327 audit(1719333271.659:539): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034306365386466366265666131376339383934366539663533353036 Jun 25 16:34:31.659000 audit[4089]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=4078 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:31.659000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034306365386466366265666131376339383934366539663533353036 Jun 25 16:34:31.664884 kernel: audit: type=1334 audit(1719333271.659:540): prog-id=150 op=LOAD Jun 25 16:34:31.659000 audit: BPF prog-id=150 op=LOAD Jun 25 16:34:31.659000 audit[4089]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=4078 pid=4089 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:31.667521 kernel: audit: type=1300 audit(1719333271.659:540): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=4078 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:31.667860 kubelet[2425]: I0625 16:34:31.667843 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-g4hfm" podStartSLOduration=29.667815256 podCreationTimestamp="2024-06-25 16:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:34:31.638772625 +0000 UTC m=+43.307200665" watchObservedRunningTime="2024-06-25 16:34:31.667815256 +0000 UTC m=+43.336243295" Jun 25 16:34:31.668408 systemd-resolved[1285]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:34:31.659000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034306365386466366265666131376339383934366539663533353036 Jun 25 16:34:31.670797 kernel: audit: type=1327 audit(1719333271.659:540): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034306365386466366265666131376339383934366539663533353036 Jun 25 16:34:31.659000 audit: BPF prog-id=150 op=UNLOAD Jun 25 16:34:31.659000 audit: BPF prog-id=149 op=UNLOAD Jun 25 16:34:31.659000 audit: BPF prog-id=151 op=LOAD Jun 25 16:34:31.659000 audit[4089]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=4078 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:31.659000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034306365386466366265666131376339383934366539663533353036 Jun 25 16:34:31.697000 audit[4106]: NETFILTER_CFG table=filter:104 family=2 entries=14 op=nft_register_rule pid=4106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:31.697000 audit[4106]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd0ccb38d0 a2=0 a3=7ffd0ccb38bc items=0 ppid=2604 pid=4106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:31.697000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:31.699000 audit[4106]: NETFILTER_CFG table=nat:105 family=2 entries=14 op=nft_register_rule pid=4106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:31.699000 audit[4106]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd0ccb38d0 a2=0 a3=0 items=0 ppid=2604 pid=4106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:31.699000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:31.704770 containerd[1344]: time="2024-06-25T16:34:31.704734344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-glphj,Uid:d80999f6-e8ab-41e9-805a-94aad5dbb65d,Namespace:kube-system,Attempt:1,} returns sandbox id \"040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5\"" Jun 25 16:34:31.707587 containerd[1344]: time="2024-06-25T16:34:31.706763602Z" level=info msg="CreateContainer within sandbox \"040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:34:31.712176 containerd[1344]: time="2024-06-25T16:34:31.712134119Z" level=info msg="CreateContainer within sandbox \"040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"05548e219c4ded3e048d89e450380174b87042a2ce73ed12121765d139d97e2e\"" Jun 25 16:34:31.712596 containerd[1344]: time="2024-06-25T16:34:31.712582888Z" level=info msg="StartContainer for \"05548e219c4ded3e048d89e450380174b87042a2ce73ed12121765d139d97e2e\"" Jun 25 16:34:31.730081 systemd[1]: run-netns-cni\x2d401fa345\x2d5a6e\x2da532\x2d094e\x2d7a4837141fb6.mount: Deactivated successfully. Jun 25 16:34:31.737829 systemd[1]: Started cri-containerd-05548e219c4ded3e048d89e450380174b87042a2ce73ed12121765d139d97e2e.scope - libcontainer container 05548e219c4ded3e048d89e450380174b87042a2ce73ed12121765d139d97e2e. 
Jun 25 16:34:31.744000 audit[4126]: NETFILTER_CFG table=filter:106 family=2 entries=11 op=nft_register_rule pid=4126 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:31.744000 audit[4126]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff1a7d34b0 a2=0 a3=7fff1a7d349c items=0 ppid=2604 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:31.744000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:31.744000 audit: BPF prog-id=152 op=LOAD Jun 25 16:34:31.745000 audit: BPF prog-id=153 op=LOAD Jun 25 16:34:31.745000 audit[4127]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4078 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:31.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035353438653231396334646564336530343864383965343530333830 Jun 25 16:34:31.745000 audit: BPF prog-id=154 op=LOAD Jun 25 16:34:31.745000 audit[4127]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4078 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:31.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035353438653231396334646564336530343864383965343530333830 Jun 25 16:34:31.745000 audit: BPF prog-id=154 op=UNLOAD Jun 25 16:34:31.745000 audit: BPF prog-id=153 op=UNLOAD Jun 25 16:34:31.745000 audit: BPF prog-id=155 op=LOAD Jun 25 16:34:31.745000 audit[4127]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4078 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:31.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035353438653231396334646564336530343864383965343530333830 Jun 25 16:34:31.744000 audit[4126]: NETFILTER_CFG table=nat:107 family=2 entries=35 op=nft_register_chain pid=4126 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:31.744000 audit[4126]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff1a7d34b0 a2=0 a3=7fff1a7d349c items=0 ppid=2604 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:31.744000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:31.754795 containerd[1344]: time="2024-06-25T16:34:31.754766138Z" level=info msg="StartContainer for \"05548e219c4ded3e048d89e450380174b87042a2ce73ed12121765d139d97e2e\" returns successfully" Jun 25 16:34:32.328117 systemd-networkd[1156]: cali2e77d959bf6: Gained IPv6LL Jun 25 16:34:32.391876 systemd-networkd[1156]: cali02578a52c66: Gained IPv6LL Jun 25 16:34:32.480530 containerd[1344]: time="2024-06-25T16:34:32.480494025Z" level=info msg="StopPodSandbox for \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\"" Jun 25 16:34:32.566054 containerd[1344]: 2024-06-25 16:34:32.539 [INFO][4175] k8s.go 608: Cleaning up netns ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Jun 25 16:34:32.566054 containerd[1344]: 2024-06-25 16:34:32.539 [INFO][4175] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" iface="eth0" netns="/var/run/netns/cni-706e1e1c-31fa-31f3-6276-f1e986259b1d" Jun 25 16:34:32.566054 containerd[1344]: 2024-06-25 16:34:32.539 [INFO][4175] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" iface="eth0" netns="/var/run/netns/cni-706e1e1c-31fa-31f3-6276-f1e986259b1d" Jun 25 16:34:32.566054 containerd[1344]: 2024-06-25 16:34:32.539 [INFO][4175] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" iface="eth0" netns="/var/run/netns/cni-706e1e1c-31fa-31f3-6276-f1e986259b1d" Jun 25 16:34:32.566054 containerd[1344]: 2024-06-25 16:34:32.539 [INFO][4175] k8s.go 615: Releasing IP address(es) ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Jun 25 16:34:32.566054 containerd[1344]: 2024-06-25 16:34:32.539 [INFO][4175] utils.go 188: Calico CNI releasing IP address ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Jun 25 16:34:32.566054 containerd[1344]: 2024-06-25 16:34:32.560 [INFO][4181] ipam_plugin.go 411: Releasing address using handleID ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" HandleID="k8s-pod-network.7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Workload="localhost-k8s-csi--node--driver--z6zw6-eth0" Jun 25 16:34:32.566054 containerd[1344]: 2024-06-25 16:34:32.560 [INFO][4181] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:32.566054 containerd[1344]: 2024-06-25 16:34:32.560 [INFO][4181] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:32.566054 containerd[1344]: 2024-06-25 16:34:32.563 [WARNING][4181] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" HandleID="k8s-pod-network.7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Workload="localhost-k8s-csi--node--driver--z6zw6-eth0" Jun 25 16:34:32.566054 containerd[1344]: 2024-06-25 16:34:32.563 [INFO][4181] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" HandleID="k8s-pod-network.7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Workload="localhost-k8s-csi--node--driver--z6zw6-eth0" Jun 25 16:34:32.566054 containerd[1344]: 2024-06-25 16:34:32.563 [INFO][4181] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:32.566054 containerd[1344]: 2024-06-25 16:34:32.564 [INFO][4175] k8s.go 621: Teardown processing complete. ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Jun 25 16:34:32.569386 systemd[1]: run-netns-cni\x2d706e1e1c\x2d31fa\x2d31f3\x2d6276\x2df1e986259b1d.mount: Deactivated successfully. Jun 25 16:34:32.574972 containerd[1344]: time="2024-06-25T16:34:32.570011089Z" level=info msg="TearDown network for sandbox \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\" successfully" Jun 25 16:34:32.574972 containerd[1344]: time="2024-06-25T16:34:32.570031534Z" level=info msg="StopPodSandbox for \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\" returns successfully" Jun 25 16:34:32.574972 containerd[1344]: time="2024-06-25T16:34:32.570428055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6zw6,Uid:8da5006e-233c-4549-81b3-ec063a911736,Namespace:calico-system,Attempt:1,}" Jun 25 16:34:32.676981 kubelet[2425]: I0625 16:34:32.676820 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-glphj" podStartSLOduration=30.676798245 podCreationTimestamp="2024-06-25 16:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:34:32.669830359 +0000 UTC m=+44.338258403" watchObservedRunningTime="2024-06-25 16:34:32.676798245 +0000 UTC m=+44.345226284" Jun 25 16:34:32.677000 audit[4204]: NETFILTER_CFG table=filter:108 family=2 entries=8 op=nft_register_rule pid=4204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:32.677000 audit[4204]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc0dc2b2c0 a2=0 a3=7ffc0dc2b2ac items=0 ppid=2604 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:32.677000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:32.678000 audit[4204]: NETFILTER_CFG table=nat:109 family=2 entries=44 op=nft_register_rule pid=4204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:32.678000 audit[4204]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc0dc2b2c0 a2=0 a3=7ffc0dc2b2ac items=0 ppid=2604 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:32.678000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:32.729761 systemd-networkd[1156]: cali37d3952c27c: Link UP Jun 25 16:34:32.730761 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:34:32.730929 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali37d3952c27c: link becomes ready Jun 25 16:34:32.731077 systemd-networkd[1156]: cali37d3952c27c: Gained carrier Jun 25 16:34:32.744227 containerd[1344]: 2024-06-25 16:34:32.663 [INFO][4190] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--z6zw6-eth0 csi-node-driver- calico-system 8da5006e-233c-4549-81b3-ec063a911736 698 0 2024-06-25 16:34:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-z6zw6 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali37d3952c27c [] []}} ContainerID="f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b" Namespace="calico-system" Pod="csi-node-driver-z6zw6" WorkloadEndpoint="localhost-k8s-csi--node--driver--z6zw6-" Jun 25 16:34:32.744227 containerd[1344]: 2024-06-25 16:34:32.663 [INFO][4190] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b" Namespace="calico-system" Pod="csi-node-driver-z6zw6" WorkloadEndpoint="localhost-k8s-csi--node--driver--z6zw6-eth0" Jun 25 16:34:32.744227 containerd[1344]: 2024-06-25 16:34:32.694 [INFO][4205] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b" HandleID="k8s-pod-network.f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b" Workload="localhost-k8s-csi--node--driver--z6zw6-eth0" Jun 25 16:34:32.744227 containerd[1344]: 2024-06-25 16:34:32.704 [INFO][4205] ipam_plugin.go 264: Auto assigning IP ContainerID="f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b" HandleID="k8s-pod-network.f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b" Workload="localhost-k8s-csi--node--driver--z6zw6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e5ad0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-z6zw6", "timestamp":"2024-06-25 16:34:32.694585955 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:34:32.744227 containerd[1344]: 2024-06-25 16:34:32.704 [INFO][4205] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:32.744227 containerd[1344]: 2024-06-25 16:34:32.704 [INFO][4205] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:34:32.744227 containerd[1344]: 2024-06-25 16:34:32.704 [INFO][4205] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:34:32.744227 containerd[1344]: 2024-06-25 16:34:32.705 [INFO][4205] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b" host="localhost" Jun 25 16:34:32.744227 containerd[1344]: 2024-06-25 16:34:32.715 [INFO][4205] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:34:32.744227 containerd[1344]: 2024-06-25 16:34:32.718 [INFO][4205] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:34:32.744227 containerd[1344]: 2024-06-25 16:34:32.719 [INFO][4205] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:34:32.744227 containerd[1344]: 2024-06-25 16:34:32.720 [INFO][4205] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:34:32.744227 containerd[1344]: 2024-06-25 16:34:32.720 [INFO][4205] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b" host="localhost" Jun 25 16:34:32.744227 containerd[1344]: 2024-06-25 16:34:32.721 [INFO][4205] ipam.go 1685: Creating new handle: k8s-pod-network.f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b Jun 25 16:34:32.744227 containerd[1344]: 2024-06-25 16:34:32.723 [INFO][4205] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b" host="localhost" Jun 25 16:34:32.744227 containerd[1344]: 2024-06-25 16:34:32.727 [INFO][4205] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b" host="localhost" Jun 25 16:34:32.744227 containerd[1344]: 2024-06-25 16:34:32.727 [INFO][4205] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b" host="localhost" Jun 25 16:34:32.744227 containerd[1344]: 2024-06-25 16:34:32.727 [INFO][4205] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:34:32.744227 containerd[1344]: 2024-06-25 16:34:32.727 [INFO][4205] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b" HandleID="k8s-pod-network.f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b" Workload="localhost-k8s-csi--node--driver--z6zw6-eth0" Jun 25 16:34:32.745077 containerd[1344]: 2024-06-25 16:34:32.728 [INFO][4190] k8s.go 386: Populated endpoint ContainerID="f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b" Namespace="calico-system" Pod="csi-node-driver-z6zw6" WorkloadEndpoint="localhost-k8s-csi--node--driver--z6zw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--z6zw6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8da5006e-233c-4549-81b3-ec063a911736", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-z6zw6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali37d3952c27c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:32.745077 containerd[1344]: 2024-06-25 16:34:32.728 [INFO][4190] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b" Namespace="calico-system" Pod="csi-node-driver-z6zw6" WorkloadEndpoint="localhost-k8s-csi--node--driver--z6zw6-eth0" Jun 25 16:34:32.745077 containerd[1344]: 2024-06-25 16:34:32.728 [INFO][4190] dataplane_linux.go 68: Setting the host side veth name to cali37d3952c27c ContainerID="f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b" Namespace="calico-system" Pod="csi-node-driver-z6zw6" WorkloadEndpoint="localhost-k8s-csi--node--driver--z6zw6-eth0" Jun 25 16:34:32.745077 containerd[1344]: 2024-06-25 16:34:32.731 [INFO][4190] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b" Namespace="calico-system" Pod="csi-node-driver-z6zw6" WorkloadEndpoint="localhost-k8s-csi--node--driver--z6zw6-eth0" Jun 25 16:34:32.745077 containerd[1344]: 2024-06-25 16:34:32.731 [INFO][4190] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b" Namespace="calico-system" Pod="csi-node-driver-z6zw6" WorkloadEndpoint="localhost-k8s-csi--node--driver--z6zw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--z6zw6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8da5006e-233c-4549-81b3-ec063a911736", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b", Pod:"csi-node-driver-z6zw6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali37d3952c27c", MAC:"76:4a:d2:38:36:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:32.745077 containerd[1344]: 2024-06-25 16:34:32.741 [INFO][4190] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b" Namespace="calico-system" Pod="csi-node-driver-z6zw6" WorkloadEndpoint="localhost-k8s-csi--node--driver--z6zw6-eth0" Jun 25 16:34:32.763000 audit[4240]: NETFILTER_CFG table=filter:110 family=2 entries=42 op=nft_register_chain pid=4240 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:34:32.763000 audit[4240]: SYSCALL arch=c000003e syscall=46 success=yes exit=21016 a0=3 a1=7ffd8718c670 a2=0 a3=7ffd8718c65c items=0 ppid=3590 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:32.763000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:34:32.765680 containerd[1344]: time="2024-06-25T16:34:32.765585666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:34:32.765680 containerd[1344]: time="2024-06-25T16:34:32.765621730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:32.765680 containerd[1344]: time="2024-06-25T16:34:32.765631755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:34:32.765680 containerd[1344]: time="2024-06-25T16:34:32.765637267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:32.779826 systemd[1]: Started cri-containerd-f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b.scope - libcontainer container f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b. 
Jun 25 16:34:32.786000 audit: BPF prog-id=156 op=LOAD Jun 25 16:34:32.786000 audit: BPF prog-id=157 op=LOAD Jun 25 16:34:32.786000 audit[4246]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4235 pid=4246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:32.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639396630626534386663316232303230333837616232613439383935 Jun 25 16:34:32.786000 audit: BPF prog-id=158 op=LOAD Jun 25 16:34:32.786000 audit[4246]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4235 pid=4246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:32.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639396630626534386663316232303230333837616232613439383935 Jun 25 16:34:32.786000 audit: BPF prog-id=158 op=UNLOAD Jun 25 16:34:32.786000 audit: BPF prog-id=157 op=UNLOAD Jun 25 16:34:32.786000 audit: BPF prog-id=159 op=LOAD Jun 25 16:34:32.786000 audit[4246]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4235 pid=4246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:32.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639396630626534386663316232303230333837616232613439383935 Jun 25 16:34:32.788040 systemd-resolved[1285]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:34:32.796833 containerd[1344]: time="2024-06-25T16:34:32.796787608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6zw6,Uid:8da5006e-233c-4549-81b3-ec063a911736,Namespace:calico-system,Attempt:1,} returns sandbox id \"f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b\"" Jun 25 16:34:33.222488 containerd[1344]: time="2024-06-25T16:34:33.222406815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:33.223537 containerd[1344]: time="2024-06-25T16:34:33.223490689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 16:34:33.224301 containerd[1344]: time="2024-06-25T16:34:33.224235611Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:33.226039 containerd[1344]: time="2024-06-25T16:34:33.226014975Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:33.227157 containerd[1344]: time="2024-06-25T16:34:33.227128282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:33.227693 containerd[1344]: time="2024-06-25T16:34:33.227668293Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 2.270761755s" Jun 25 16:34:33.227773 containerd[1344]: time="2024-06-25T16:34:33.227694770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 16:34:33.228607 containerd[1344]: time="2024-06-25T16:34:33.228366741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 16:34:33.247975 containerd[1344]: time="2024-06-25T16:34:33.247952156Z" level=info msg="CreateContainer within sandbox \"ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 16:34:33.255756 containerd[1344]: time="2024-06-25T16:34:33.255712117Z" level=info msg="CreateContainer within sandbox \"ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f47c1b9aee63ea9831b449233627be9b80bd5b212f2102522d75dc565747eabb\"" Jun 25 16:34:33.256813 containerd[1344]: time="2024-06-25T16:34:33.256798943Z" level=info msg="StartContainer for \"f47c1b9aee63ea9831b449233627be9b80bd5b212f2102522d75dc565747eabb\"" Jun 25 16:34:33.272151 systemd[1]: Started cri-containerd-f47c1b9aee63ea9831b449233627be9b80bd5b212f2102522d75dc565747eabb.scope - libcontainer container f47c1b9aee63ea9831b449233627be9b80bd5b212f2102522d75dc565747eabb. 
Jun 25 16:34:33.282000 audit: BPF prog-id=160 op=LOAD Jun 25 16:34:33.282000 audit: BPF prog-id=161 op=LOAD Jun 25 16:34:33.282000 audit[4283]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3899 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:33.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634376331623961656536336561393833316234343932333336323762 Jun 25 16:34:33.282000 audit: BPF prog-id=162 op=LOAD Jun 25 16:34:33.282000 audit[4283]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3899 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:33.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634376331623961656536336561393833316234343932333336323762 Jun 25 16:34:33.282000 audit: BPF prog-id=162 op=UNLOAD Jun 25 16:34:33.282000 audit: BPF prog-id=161 op=UNLOAD Jun 25 16:34:33.282000 audit: BPF prog-id=163 op=LOAD Jun 25 16:34:33.282000 audit[4283]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3899 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:33.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634376331623961656536336561393833316234343932333336323762 Jun 25 16:34:33.301652 containerd[1344]: time="2024-06-25T16:34:33.301625182Z" level=info msg="StartContainer for \"f47c1b9aee63ea9831b449233627be9b80bd5b212f2102522d75dc565747eabb\" returns successfully" Jun 25 16:34:33.543845 systemd-networkd[1156]: cali43f37aceffd: Gained IPv6LL Jun 25 16:34:33.659864 kubelet[2425]: I0625 16:34:33.659837 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-678cc79c9f-99468" podStartSLOduration=24.387946511 podCreationTimestamp="2024-06-25 16:34:07 +0000 UTC" firstStartedPulling="2024-06-25 16:34:30.956085974 +0000 UTC m=+42.624514009" lastFinishedPulling="2024-06-25 16:34:33.22794425 +0000 UTC m=+44.896372292" observedRunningTime="2024-06-25 16:34:33.657380183 +0000 UTC m=+45.325808227" watchObservedRunningTime="2024-06-25 16:34:33.659804794 +0000 UTC m=+45.328232839" Jun 25 16:34:33.698000 audit[4331]: NETFILTER_CFG table=filter:111 family=2 entries=8 op=nft_register_rule pid=4331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:33.698000 audit[4331]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffdf68d04d0 a2=0 a3=7ffdf68d04bc items=0 ppid=2604 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:33.698000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:33.712000 audit[4331]: NETFILTER_CFG table=nat:112 family=2 entries=56 op=nft_register_chain pid=4331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:33.712000 audit[4331]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffdf68d04d0 a2=0 a3=7ffdf68d04bc items=0 ppid=2604 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:33.712000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:33.769080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1776306169.mount: Deactivated successfully. Jun 25 16:34:34.377108 systemd-networkd[1156]: cali37d3952c27c: Gained IPv6LL Jun 25 16:34:34.419213 containerd[1344]: time="2024-06-25T16:34:34.419190652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:34.419823 containerd[1344]: time="2024-06-25T16:34:34.419798074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 16:34:34.420242 containerd[1344]: time="2024-06-25T16:34:34.420087677Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:34.421115 containerd[1344]: time="2024-06-25T16:34:34.421100677Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:34.422138 containerd[1344]: time="2024-06-25T16:34:34.422123827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:34.424164 containerd[1344]: time="2024-06-25T16:34:34.424147938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.195758257s" Jun 25 16:34:34.424238 containerd[1344]: time="2024-06-25T16:34:34.424226938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 16:34:34.427409 containerd[1344]: time="2024-06-25T16:34:34.427376096Z" level=info msg="CreateContainer within sandbox \"f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 16:34:34.434459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1833160217.mount: Deactivated successfully. 
Jun 25 16:34:34.436316 containerd[1344]: time="2024-06-25T16:34:34.436296668Z" level=info msg="CreateContainer within sandbox \"f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6f16dcd796009143a014102f885a2decf3afb8d7b736acc61eb24e468acfb6ab\"" Jun 25 16:34:34.436605 containerd[1344]: time="2024-06-25T16:34:34.436594425Z" level=info msg="StartContainer for \"6f16dcd796009143a014102f885a2decf3afb8d7b736acc61eb24e468acfb6ab\"" Jun 25 16:34:34.476814 systemd[1]: Started cri-containerd-6f16dcd796009143a014102f885a2decf3afb8d7b736acc61eb24e468acfb6ab.scope - libcontainer container 6f16dcd796009143a014102f885a2decf3afb8d7b736acc61eb24e468acfb6ab. Jun 25 16:34:34.485000 audit: BPF prog-id=164 op=LOAD Jun 25 16:34:34.485000 audit[4345]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=4235 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:34.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666313664636437393630303931343361303134313032663838356132 Jun 25 16:34:34.485000 audit: BPF prog-id=165 op=LOAD Jun 25 16:34:34.485000 audit[4345]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=4235 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:34.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666313664636437393630303931343361303134313032663838356132 Jun 25 16:34:34.485000 audit: BPF prog-id=165 op=UNLOAD Jun 25 16:34:34.485000 audit: BPF prog-id=164 op=UNLOAD Jun 25 16:34:34.485000 audit: BPF prog-id=166 op=LOAD Jun 25 16:34:34.485000 audit[4345]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=4235 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:34.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666313664636437393630303931343361303134313032663838356132 Jun 25 16:34:34.495446 containerd[1344]: time="2024-06-25T16:34:34.495425358Z" level=info msg="StartContainer for \"6f16dcd796009143a014102f885a2decf3afb8d7b736acc61eb24e468acfb6ab\" returns successfully" Jun 25 16:34:34.496804 containerd[1344]: time="2024-06-25T16:34:34.496789951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 16:34:34.768516 systemd[1]: run-containerd-runc-k8s.io-6f16dcd796009143a014102f885a2decf3afb8d7b736acc61eb24e468acfb6ab-runc.7OmVCX.mount: Deactivated successfully. 
Jun 25 16:34:35.716373 containerd[1344]: time="2024-06-25T16:34:35.716345094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:35.716811 containerd[1344]: time="2024-06-25T16:34:35.716782285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 16:34:35.716988 containerd[1344]: time="2024-06-25T16:34:35.716972299Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:35.717794 containerd[1344]: time="2024-06-25T16:34:35.717779360Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:35.718591 containerd[1344]: time="2024-06-25T16:34:35.718575286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:35.719075 containerd[1344]: time="2024-06-25T16:34:35.719057126Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.222205664s" Jun 25 16:34:35.719108 containerd[1344]: time="2024-06-25T16:34:35.719076701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 16:34:35.720346 containerd[1344]: time="2024-06-25T16:34:35.720332486Z" level=info msg="CreateContainer within sandbox \"f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 16:34:35.727040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1061681951.mount: Deactivated successfully. Jun 25 16:34:35.738466 containerd[1344]: time="2024-06-25T16:34:35.738435680Z" level=info msg="CreateContainer within sandbox \"f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"31205381273061a1c59868e06f5dbee25dcf9a45ec5e9223dbfc924f9de88535\"" Jun 25 16:34:35.738706 containerd[1344]: time="2024-06-25T16:34:35.738693870Z" level=info msg="StartContainer for \"31205381273061a1c59868e06f5dbee25dcf9a45ec5e9223dbfc924f9de88535\"" Jun 25 16:34:35.760814 systemd[1]: Started cri-containerd-31205381273061a1c59868e06f5dbee25dcf9a45ec5e9223dbfc924f9de88535.scope - libcontainer container 31205381273061a1c59868e06f5dbee25dcf9a45ec5e9223dbfc924f9de88535. 
Jun 25 16:34:35.767000 audit: BPF prog-id=167 op=LOAD Jun 25 16:34:35.767000 audit[4389]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=4235 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:35.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331323035333831323733303631613163353938363865303666356462 Jun 25 16:34:35.767000 audit: BPF prog-id=168 op=LOAD Jun 25 16:34:35.767000 audit[4389]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=4235 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:35.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331323035333831323733303631613163353938363865303666356462 Jun 25 16:34:35.767000 audit: BPF prog-id=168 op=UNLOAD Jun 25 16:34:35.767000 audit: BPF prog-id=167 op=UNLOAD Jun 25 16:34:35.767000 audit: BPF prog-id=169 op=LOAD Jun 25 16:34:35.767000 audit[4389]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=4235 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:35.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331323035333831323733303631613163353938363865303666356462 Jun 25 16:34:35.800826 containerd[1344]: time="2024-06-25T16:34:35.800800239Z" level=info msg="StartContainer for \"31205381273061a1c59868e06f5dbee25dcf9a45ec5e9223dbfc924f9de88535\" returns successfully" Jun 25 16:34:36.670462 kubelet[2425]: I0625 16:34:36.670441 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-z6zw6" podStartSLOduration=26.748903445 podCreationTimestamp="2024-06-25 16:34:07 +0000 UTC" firstStartedPulling="2024-06-25 16:34:32.7976761 +0000 UTC m=+44.466104135" lastFinishedPulling="2024-06-25 16:34:35.719180651 +0000 UTC m=+47.387608687" observedRunningTime="2024-06-25 16:34:36.669980878 +0000 UTC m=+48.338408917" watchObservedRunningTime="2024-06-25 16:34:36.670407997 +0000 UTC m=+48.338836036" Jun 25 16:34:36.680410 kubelet[2425]: I0625 16:34:36.680387 2425 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 16:34:36.684585 kubelet[2425]: I0625 16:34:36.684552 2425 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 16:34:37.178000 audit[4430]: NETFILTER_CFG table=filter:113 family=2 entries=9 op=nft_register_rule pid=4430 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jun 25 16:34:37.182381 kernel: kauditd_printk_skb: 90 callbacks suppressed Jun 25 16:34:37.182464 kernel: audit: type=1325 audit(1719333277.178:581): table=filter:113 family=2 entries=9 op=nft_register_rule pid=4430 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:37.182501 kernel: audit: type=1300 audit(1719333277.178:581): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff60a76970 a2=0 a3=7fff60a7695c items=0 ppid=2604 pid=4430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:37.178000 audit[4430]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff60a76970 a2=0 a3=7fff60a7695c items=0 ppid=2604 pid=4430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:37.178000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:37.186011 kernel: audit: type=1327 audit(1719333277.178:581): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:37.185000 audit[4430]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=4430 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:37.185000 audit[4430]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff60a76970 a2=0 a3=7fff60a7695c items=0 ppid=2604 pid=4430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:37.188784 kernel: audit: type=1325 audit(1719333277.185:582): table=nat:114 family=2 entries=20 op=nft_register_rule pid=4430 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:37.188813 kernel: audit: type=1300 audit(1719333277.185:582): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff60a76970 a2=0 a3=7fff60a7695c items=0 ppid=2604 pid=4430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:37.185000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:37.191988 kernel: audit: type=1327 audit(1719333277.185:582): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:37.198000 audit[4432]: NETFILTER_CFG table=filter:115 family=2 entries=10 op=nft_register_rule pid=4432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:37.201739 kernel: audit: type=1325 audit(1719333277.198:583): table=filter:115 family=2 entries=10 op=nft_register_rule pid=4432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:37.198000 audit[4432]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdb263bcf0 a2=0 a3=7ffdb263bcdc items=0 ppid=2604 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:37.198000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:37.205377 kernel: audit: type=1300 audit(1719333277.198:583): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdb263bcf0 a2=0 a3=7ffdb263bcdc items=0 ppid=2604 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:37.205411 kernel: audit: type=1327 audit(1719333277.198:583): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:37.204000 audit[4432]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=4432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:37.204000 audit[4432]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdb263bcf0 a2=0 a3=7ffdb263bcdc items=0 ppid=2604 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:37.204000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:37.207735 kernel: audit: type=1325 audit(1719333277.204:584): table=nat:116 family=2 entries=20 op=nft_register_rule pid=4432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:37.248534 kubelet[2425]: I0625 16:34:37.248514 2425 topology_manager.go:215] "Topology Admit Handler" podUID="1634bf3f-4db3-485f-ab86-9c572b85aeb1" podNamespace="calico-apiserver" podName="calico-apiserver-5c58479c75-5wzx5" Jun 25 16:34:37.257327 kubelet[2425]: I0625 16:34:37.257317 2425 topology_manager.go:215] "Topology Admit Handler" podUID="13c64d0f-bcc5-48cf-b69b-9c6f2fd1c93b" podNamespace="calico-apiserver" podName="calico-apiserver-5c58479c75-zzt7l" Jun 25 16:34:37.282658 systemd[1]: Created slice kubepods-besteffort-pod1634bf3f_4db3_485f_ab86_9c572b85aeb1.slice - libcontainer container kubepods-besteffort-pod1634bf3f_4db3_485f_ab86_9c572b85aeb1.slice. Jun 25 16:34:37.284256 systemd[1]: Created slice kubepods-besteffort-pod13c64d0f_bcc5_48cf_b69b_9c6f2fd1c93b.slice - libcontainer container kubepods-besteffort-pod13c64d0f_bcc5_48cf_b69b_9c6f2fd1c93b.slice. 
Jun 25 16:34:37.344932 kubelet[2425]: I0625 16:34:37.344915 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6spp\" (UniqueName: \"kubernetes.io/projected/13c64d0f-bcc5-48cf-b69b-9c6f2fd1c93b-kube-api-access-h6spp\") pod \"calico-apiserver-5c58479c75-zzt7l\" (UID: \"13c64d0f-bcc5-48cf-b69b-9c6f2fd1c93b\") " pod="calico-apiserver/calico-apiserver-5c58479c75-zzt7l" Jun 25 16:34:37.345100 kubelet[2425]: I0625 16:34:37.345091 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/13c64d0f-bcc5-48cf-b69b-9c6f2fd1c93b-calico-apiserver-certs\") pod \"calico-apiserver-5c58479c75-zzt7l\" (UID: \"13c64d0f-bcc5-48cf-b69b-9c6f2fd1c93b\") " pod="calico-apiserver/calico-apiserver-5c58479c75-zzt7l" Jun 25 16:34:37.345161 kubelet[2425]: I0625 16:34:37.345155 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv4xk\" (UniqueName: \"kubernetes.io/projected/1634bf3f-4db3-485f-ab86-9c572b85aeb1-kube-api-access-fv4xk\") pod \"calico-apiserver-5c58479c75-5wzx5\" (UID: \"1634bf3f-4db3-485f-ab86-9c572b85aeb1\") " pod="calico-apiserver/calico-apiserver-5c58479c75-5wzx5" Jun 25 16:34:37.345225 kubelet[2425]: I0625 16:34:37.345219 2425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1634bf3f-4db3-485f-ab86-9c572b85aeb1-calico-apiserver-certs\") pod \"calico-apiserver-5c58479c75-5wzx5\" (UID: \"1634bf3f-4db3-485f-ab86-9c572b85aeb1\") " pod="calico-apiserver/calico-apiserver-5c58479c75-5wzx5" Jun 25 16:34:37.445716 kubelet[2425]: E0625 16:34:37.445634 2425 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:34:37.446329 kubelet[2425]: E0625 16:34:37.445999 2425 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:34:37.449426 kubelet[2425]: E0625 16:34:37.449408 2425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1634bf3f-4db3-485f-ab86-9c572b85aeb1-calico-apiserver-certs podName:1634bf3f-4db3-485f-ab86-9c572b85aeb1 nodeName:}" failed. No retries permitted until 2024-06-25 16:34:37.945679205 +0000 UTC m=+49.614107250 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/1634bf3f-4db3-485f-ab86-9c572b85aeb1-calico-apiserver-certs") pod "calico-apiserver-5c58479c75-5wzx5" (UID: "1634bf3f-4db3-485f-ab86-9c572b85aeb1") : secret "calico-apiserver-certs" not found Jun 25 16:34:37.449529 kubelet[2425]: E0625 16:34:37.449436 2425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13c64d0f-bcc5-48cf-b69b-9c6f2fd1c93b-calico-apiserver-certs podName:13c64d0f-bcc5-48cf-b69b-9c6f2fd1c93b nodeName:}" failed. No retries permitted until 2024-06-25 16:34:37.949424252 +0000 UTC m=+49.617852293 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/13c64d0f-bcc5-48cf-b69b-9c6f2fd1c93b-calico-apiserver-certs") pod "calico-apiserver-5c58479c75-zzt7l" (UID: "13c64d0f-bcc5-48cf-b69b-9c6f2fd1c93b") : secret "calico-apiserver-certs" not found Jun 25 16:34:37.951937 kubelet[2425]: E0625 16:34:37.951916 2425 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:34:37.952403 kubelet[2425]: E0625 16:34:37.952391 2425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13c64d0f-bcc5-48cf-b69b-9c6f2fd1c93b-calico-apiserver-certs podName:13c64d0f-bcc5-48cf-b69b-9c6f2fd1c93b nodeName:}" failed. No retries permitted until 2024-06-25 16:34:38.952373837 +0000 UTC m=+50.620801881 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/13c64d0f-bcc5-48cf-b69b-9c6f2fd1c93b-calico-apiserver-certs") pod "calico-apiserver-5c58479c75-zzt7l" (UID: "13c64d0f-bcc5-48cf-b69b-9c6f2fd1c93b") : secret "calico-apiserver-certs" not found Jun 25 16:34:37.952515 kubelet[2425]: E0625 16:34:37.951944 2425 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:34:37.952597 kubelet[2425]: E0625 16:34:37.952578 2425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1634bf3f-4db3-485f-ab86-9c572b85aeb1-calico-apiserver-certs podName:1634bf3f-4db3-485f-ab86-9c572b85aeb1 nodeName:}" failed. No retries permitted until 2024-06-25 16:34:38.952569827 +0000 UTC m=+50.620997868 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/1634bf3f-4db3-485f-ab86-9c572b85aeb1-calico-apiserver-certs") pod "calico-apiserver-5c58479c75-5wzx5" (UID: "1634bf3f-4db3-485f-ab86-9c572b85aeb1") : secret "calico-apiserver-certs" not found Jun 25 16:34:39.095680 containerd[1344]: time="2024-06-25T16:34:39.095641531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c58479c75-5wzx5,Uid:1634bf3f-4db3-485f-ab86-9c572b85aeb1,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:34:39.096252 containerd[1344]: time="2024-06-25T16:34:39.095643809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c58479c75-zzt7l,Uid:13c64d0f-bcc5-48cf-b69b-9c6f2fd1c93b,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:34:39.271750 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:34:39.271817 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliea13b16a788: link becomes ready Jun 25 16:34:39.272095 systemd-networkd[1156]: caliea13b16a788: Link UP Jun 25 16:34:39.272244 systemd-networkd[1156]: caliea13b16a788: Gained carrier Jun 25 16:34:39.281911 containerd[1344]: 2024-06-25 16:34:39.190 [INFO][4446] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c58479c75--5wzx5-eth0 calico-apiserver-5c58479c75- calico-apiserver 1634bf3f-4db3-485f-ab86-9c572b85aeb1 791 0 2024-06-25 16:34:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c58479c75 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c58479c75-5wzx5 eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliea13b16a788 [] []}} ContainerID="02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd" Namespace="calico-apiserver" Pod="calico-apiserver-5c58479c75-5wzx5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c58479c75--5wzx5-" Jun 25 16:34:39.281911 containerd[1344]: 2024-06-25 16:34:39.190 [INFO][4446] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd" Namespace="calico-apiserver" Pod="calico-apiserver-5c58479c75-5wzx5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c58479c75--5wzx5-eth0" Jun 25 16:34:39.281911 containerd[1344]: 2024-06-25 16:34:39.223 [INFO][4465] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd" HandleID="k8s-pod-network.02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd" Workload="localhost-k8s-calico--apiserver--5c58479c75--5wzx5-eth0" Jun 25 16:34:39.281911 containerd[1344]: 2024-06-25 16:34:39.231 [INFO][4465] ipam_plugin.go 264: Auto assigning IP ContainerID="02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd" HandleID="k8s-pod-network.02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd" Workload="localhost-k8s-calico--apiserver--5c58479c75--5wzx5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291170), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c58479c75-5wzx5", "timestamp":"2024-06-25 16:34:39.223368141 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:34:39.281911 containerd[1344]: 2024-06-25 16:34:39.231 [INFO][4465] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:39.281911 containerd[1344]: 2024-06-25 16:34:39.231 [INFO][4465] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:34:39.281911 containerd[1344]: 2024-06-25 16:34:39.231 [INFO][4465] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:34:39.281911 containerd[1344]: 2024-06-25 16:34:39.235 [INFO][4465] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd" host="localhost" Jun 25 16:34:39.281911 containerd[1344]: 2024-06-25 16:34:39.252 [INFO][4465] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:34:39.281911 containerd[1344]: 2024-06-25 16:34:39.261 [INFO][4465] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:34:39.281911 containerd[1344]: 2024-06-25 16:34:39.262 [INFO][4465] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:34:39.281911 containerd[1344]: 2024-06-25 16:34:39.263 [INFO][4465] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:34:39.281911 containerd[1344]: 2024-06-25 16:34:39.263 [INFO][4465] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd" host="localhost" Jun 25 16:34:39.281911 containerd[1344]: 2024-06-25 16:34:39.264 [INFO][4465] ipam.go 1685: Creating new handle: k8s-pod-network.02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd Jun 25 16:34:39.281911 containerd[1344]: 2024-06-25 16:34:39.265 [INFO][4465] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd" host="localhost" Jun 25 16:34:39.281911 containerd[1344]: 2024-06-25 16:34:39.268 [INFO][4465] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd" host="localhost" Jun 25 16:34:39.281911 containerd[1344]: 2024-06-25 16:34:39.268 [INFO][4465] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd" host="localhost" Jun 25 16:34:39.281911 containerd[1344]: 2024-06-25 16:34:39.268 [INFO][4465] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:34:39.281911 containerd[1344]: 2024-06-25 16:34:39.268 [INFO][4465] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd" HandleID="k8s-pod-network.02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd" Workload="localhost-k8s-calico--apiserver--5c58479c75--5wzx5-eth0" Jun 25 16:34:39.283610 containerd[1344]: 2024-06-25 16:34:39.269 [INFO][4446] k8s.go 386: Populated endpoint ContainerID="02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd" Namespace="calico-apiserver" Pod="calico-apiserver-5c58479c75-5wzx5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c58479c75--5wzx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c58479c75--5wzx5-eth0", GenerateName:"calico-apiserver-5c58479c75-", Namespace:"calico-apiserver", SelfLink:"", UID:"1634bf3f-4db3-485f-ab86-9c572b85aeb1", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c58479c75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c58479c75-5wzx5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliea13b16a788", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:39.283610 containerd[1344]: 2024-06-25 16:34:39.269 [INFO][4446] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd" Namespace="calico-apiserver" Pod="calico-apiserver-5c58479c75-5wzx5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c58479c75--5wzx5-eth0" Jun 25 16:34:39.283610 containerd[1344]: 2024-06-25 16:34:39.269 [INFO][4446] dataplane_linux.go 68: Setting the host side veth name to caliea13b16a788 ContainerID="02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd" Namespace="calico-apiserver" Pod="calico-apiserver-5c58479c75-5wzx5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c58479c75--5wzx5-eth0" Jun 25 16:34:39.283610 containerd[1344]: 2024-06-25 16:34:39.273 [INFO][4446] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd" Namespace="calico-apiserver" Pod="calico-apiserver-5c58479c75-5wzx5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c58479c75--5wzx5-eth0" Jun 25 16:34:39.283610 containerd[1344]: 2024-06-25 16:34:39.273 [INFO][4446] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd" Namespace="calico-apiserver" 
Pod="calico-apiserver-5c58479c75-5wzx5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c58479c75--5wzx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c58479c75--5wzx5-eth0", GenerateName:"calico-apiserver-5c58479c75-", Namespace:"calico-apiserver", SelfLink:"", UID:"1634bf3f-4db3-485f-ab86-9c572b85aeb1", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c58479c75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd", Pod:"calico-apiserver-5c58479c75-5wzx5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliea13b16a788", MAC:"9a:92:d4:0c:c7:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:39.283610 containerd[1344]: 2024-06-25 16:34:39.279 [INFO][4446] k8s.go 500: Wrote updated endpoint to datastore ContainerID="02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd" Namespace="calico-apiserver" Pod="calico-apiserver-5c58479c75-5wzx5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c58479c75--5wzx5-eth0" Jun 25 16:34:39.303194 systemd-networkd[1156]: cali596927fbe57: Link UP Jun 25 16:34:39.304466 systemd-networkd[1156]: cali596927fbe57: Gained carrier Jun 25 16:34:39.304787 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali596927fbe57: link becomes ready Jun 25 16:34:39.306000 audit[4494]: NETFILTER_CFG table=filter:117 family=2 entries=55 op=nft_register_chain pid=4494 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:34:39.306000 audit[4494]: SYSCALL arch=c000003e syscall=46 success=yes exit=27464 a0=3 a1=7ffd5d5f5830 a2=0 a3=7ffd5d5f581c items=0 ppid=3590 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:39.306000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:34:39.313237 containerd[1344]: 2024-06-25 16:34:39.180 [INFO][4440] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c58479c75--zzt7l-eth0 calico-apiserver-5c58479c75- calico-apiserver 13c64d0f-bcc5-48cf-b69b-9c6f2fd1c93b 790 0 2024-06-25 16:34:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c58479c75 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c58479c75-zzt7l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali596927fbe57 [] []}} ContainerID="b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d" Namespace="calico-apiserver" Pod="calico-apiserver-5c58479c75-zzt7l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c58479c75--zzt7l-" Jun 25 16:34:39.313237 containerd[1344]: 2024-06-25 16:34:39.180 [INFO][4440] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d" Namespace="calico-apiserver" Pod="calico-apiserver-5c58479c75-zzt7l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c58479c75--zzt7l-eth0" Jun 25 16:34:39.313237 containerd[1344]: 2024-06-25 16:34:39.235 [INFO][4464] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d" HandleID="k8s-pod-network.b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d" Workload="localhost-k8s-calico--apiserver--5c58479c75--zzt7l-eth0" Jun 25 16:34:39.313237 containerd[1344]: 2024-06-25 16:34:39.259 [INFO][4464] ipam_plugin.go 264: Auto assigning IP ContainerID="b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d" HandleID="k8s-pod-network.b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d" Workload="localhost-k8s-calico--apiserver--5c58479c75--zzt7l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291150), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c58479c75-zzt7l", "timestamp":"2024-06-25 16:34:39.235154759 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:34:39.313237 containerd[1344]: 2024-06-25 16:34:39.259 [INFO][4464] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:39.313237 containerd[1344]: 2024-06-25 16:34:39.268 [INFO][4464] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:34:39.313237 containerd[1344]: 2024-06-25 16:34:39.268 [INFO][4464] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:34:39.313237 containerd[1344]: 2024-06-25 16:34:39.275 [INFO][4464] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d" host="localhost" Jun 25 16:34:39.313237 containerd[1344]: 2024-06-25 16:34:39.286 [INFO][4464] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:34:39.313237 containerd[1344]: 2024-06-25 16:34:39.290 [INFO][4464] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:34:39.313237 containerd[1344]: 2024-06-25 16:34:39.291 [INFO][4464] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:34:39.313237 containerd[1344]: 2024-06-25 16:34:39.292 [INFO][4464] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:34:39.313237 containerd[1344]: 2024-06-25 16:34:39.292 [INFO][4464] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d" host="localhost" Jun 25 16:34:39.313237 containerd[1344]: 2024-06-25 16:34:39.293 [INFO][4464] ipam.go 1685: Creating new handle: k8s-pod-network.b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d Jun 25 16:34:39.313237 containerd[1344]: 2024-06-25 16:34:39.296 [INFO][4464] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d" host="localhost" Jun 25 16:34:39.313237 containerd[1344]: 2024-06-25 16:34:39.298 [INFO][4464] ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d" host="localhost" Jun 25 16:34:39.313237 containerd[1344]: 2024-06-25 16:34:39.299 [INFO][4464] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d" host="localhost" Jun 25 16:34:39.313237 containerd[1344]: 2024-06-25 16:34:39.299 [INFO][4464] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:34:39.313237 containerd[1344]: 2024-06-25 16:34:39.299 [INFO][4464] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d" HandleID="k8s-pod-network.b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d" Workload="localhost-k8s-calico--apiserver--5c58479c75--zzt7l-eth0" Jun 25 16:34:39.313959 containerd[1344]: 2024-06-25 16:34:39.300 [INFO][4440] k8s.go 386: Populated endpoint ContainerID="b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d" Namespace="calico-apiserver" Pod="calico-apiserver-5c58479c75-zzt7l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c58479c75--zzt7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c58479c75--zzt7l-eth0", GenerateName:"calico-apiserver-5c58479c75-", Namespace:"calico-apiserver", SelfLink:"", UID:"13c64d0f-bcc5-48cf-b69b-9c6f2fd1c93b", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c58479c75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c58479c75-zzt7l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali596927fbe57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:39.313959 containerd[1344]: 2024-06-25 16:34:39.300 [INFO][4440] k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d" Namespace="calico-apiserver" Pod="calico-apiserver-5c58479c75-zzt7l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c58479c75--zzt7l-eth0" Jun 25 16:34:39.313959 containerd[1344]: 2024-06-25 16:34:39.300 [INFO][4440] dataplane_linux.go 68: Setting the host side veth name to cali596927fbe57 ContainerID="b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d" Namespace="calico-apiserver" Pod="calico-apiserver-5c58479c75-zzt7l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c58479c75--zzt7l-eth0" Jun 25 16:34:39.313959 containerd[1344]: 2024-06-25 16:34:39.303 [INFO][4440] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d" Namespace="calico-apiserver" Pod="calico-apiserver-5c58479c75-zzt7l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c58479c75--zzt7l-eth0" Jun 25 16:34:39.313959 containerd[1344]: 2024-06-25 16:34:39.306 [INFO][4440] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d" Namespace="calico-apiserver" 
Pod="calico-apiserver-5c58479c75-zzt7l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c58479c75--zzt7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c58479c75--zzt7l-eth0", GenerateName:"calico-apiserver-5c58479c75-", Namespace:"calico-apiserver", SelfLink:"", UID:"13c64d0f-bcc5-48cf-b69b-9c6f2fd1c93b", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c58479c75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d", Pod:"calico-apiserver-5c58479c75-zzt7l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali596927fbe57", MAC:"7a:94:6f:56:85:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:39.313959 containerd[1344]: 2024-06-25 16:34:39.311 [INFO][4440] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d" Namespace="calico-apiserver" Pod="calico-apiserver-5c58479c75-zzt7l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c58479c75--zzt7l-eth0" Jun 25 16:34:39.326000 audit[4525]: NETFILTER_CFG table=filter:118 family=2 entries=49 op=nft_register_chain pid=4525 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:34:39.326000 audit[4525]: SYSCALL arch=c000003e syscall=46 success=yes exit=24300 a0=3 a1=7fffb6c4d5d0 a2=0 a3=7fffb6c4d5bc items=0 ppid=3590 pid=4525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:39.326000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:34:39.334444 containerd[1344]: time="2024-06-25T16:34:39.334408909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:34:39.334584 containerd[1344]: time="2024-06-25T16:34:39.334570578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:39.334759 containerd[1344]: time="2024-06-25T16:34:39.334715671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:34:39.334823 containerd[1344]: time="2024-06-25T16:34:39.334810898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:39.342077 containerd[1344]: time="2024-06-25T16:34:39.341963484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:34:39.342077 containerd[1344]: time="2024-06-25T16:34:39.341992311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:39.342374 containerd[1344]: time="2024-06-25T16:34:39.342001906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:34:39.342374 containerd[1344]: time="2024-06-25T16:34:39.342312278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:34:39.348838 systemd[1]: Started cri-containerd-02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd.scope - libcontainer container 02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd. Jun 25 16:34:39.354452 systemd[1]: Started cri-containerd-b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d.scope - libcontainer container b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d. Jun 25 16:34:39.356000 audit: BPF prog-id=170 op=LOAD Jun 25 16:34:39.357000 audit: BPF prog-id=171 op=LOAD Jun 25 16:34:39.357000 audit[4536]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4524 pid=4536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:39.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032656535383962633238343636333933373333623231333563663435 Jun 25 16:34:39.357000 audit: BPF prog-id=172 op=LOAD Jun 25 16:34:39.357000 audit[4536]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4524 pid=4536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:39.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032656535383962633238343636333933373333623231333563663435 Jun 25 16:34:39.357000 audit: BPF prog-id=172 op=UNLOAD Jun 25 16:34:39.357000 audit: BPF prog-id=171 op=UNLOAD Jun 25 16:34:39.357000 audit: BPF prog-id=173 op=LOAD Jun 25 16:34:39.357000 audit[4536]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=4524 pid=4536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:39.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032656535383962633238343636333933373333623231333563663435 Jun 25 
16:34:39.359482 systemd-resolved[1285]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:34:39.360000 audit: BPF prog-id=174 op=LOAD Jun 25 16:34:39.361000 audit: BPF prog-id=175 op=LOAD Jun 25 16:34:39.361000 audit[4555]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=4526 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:39.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323330666262623135613038343135396137333037366632633731 Jun 25 16:34:39.361000 audit: BPF prog-id=176 op=LOAD Jun 25 16:34:39.361000 audit[4555]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=4526 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:39.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323330666262623135613038343135396137333037366632633731 Jun 25 16:34:39.361000 audit: BPF prog-id=176 op=UNLOAD Jun 25 16:34:39.361000 audit: BPF prog-id=175 op=UNLOAD Jun 25 16:34:39.361000 audit: BPF prog-id=177 op=LOAD Jun 25 16:34:39.361000 audit[4555]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=4526 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:39.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323330666262623135613038343135396137333037366632633731 Jun 25 16:34:39.363874 systemd-resolved[1285]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:34:39.386440 containerd[1344]: time="2024-06-25T16:34:39.386412499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c58479c75-5wzx5,Uid:1634bf3f-4db3-485f-ab86-9c572b85aeb1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd\"" Jun 25 16:34:39.389365 containerd[1344]: time="2024-06-25T16:34:39.388295906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:34:39.390720 containerd[1344]: time="2024-06-25T16:34:39.390703821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c58479c75-zzt7l,Uid:13c64d0f-bcc5-48cf-b69b-9c6f2fd1c93b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d\"" Jun 25 16:34:40.839839 systemd-networkd[1156]: cali596927fbe57: Gained IPv6LL Jun 25 16:34:40.967836 systemd-networkd[1156]: caliea13b16a788: Gained IPv6LL Jun 25 16:34:41.228876 containerd[1344]: 
time="2024-06-25T16:34:41.228805180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:41.229455 containerd[1344]: time="2024-06-25T16:34:41.229437787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 16:34:41.229719 containerd[1344]: time="2024-06-25T16:34:41.229703746Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:41.230622 containerd[1344]: time="2024-06-25T16:34:41.230608022Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:41.231418 containerd[1344]: time="2024-06-25T16:34:41.231401443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:41.231950 containerd[1344]: time="2024-06-25T16:34:41.231932136Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 1.842565092s" Jun 25 16:34:41.231988 containerd[1344]: time="2024-06-25T16:34:41.231951467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:34:41.232662 containerd[1344]: time="2024-06-25T16:34:41.232539029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:34:41.234355 containerd[1344]: time="2024-06-25T16:34:41.234195395Z" level=info msg="CreateContainer within sandbox \"02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:34:41.241043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2631432591.mount: Deactivated successfully. Jun 25 16:34:41.255888 containerd[1344]: time="2024-06-25T16:34:41.255867398Z" level=info msg="CreateContainer within sandbox \"02ee589bc28466393733b2135cf452f0e8d159c7c1cf16d051eff23891c47abd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"022549e58d4afbb37c888384069dfa0d4ee792f2e2cc64a72ded2eb105e381a4\"" Jun 25 16:34:41.257075 containerd[1344]: time="2024-06-25T16:34:41.257038786Z" level=info msg="StartContainer for \"022549e58d4afbb37c888384069dfa0d4ee792f2e2cc64a72ded2eb105e381a4\"" Jun 25 16:34:41.275891 systemd[1]: Started cri-containerd-022549e58d4afbb37c888384069dfa0d4ee792f2e2cc64a72ded2eb105e381a4.scope - libcontainer container 022549e58d4afbb37c888384069dfa0d4ee792f2e2cc64a72ded2eb105e381a4. 
Jun 25 16:34:41.287000 audit: BPF prog-id=178 op=LOAD Jun 25 16:34:41.287000 audit: BPF prog-id=179 op=LOAD Jun 25 16:34:41.287000 audit[4608]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4524 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:41.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032323534396535386434616662623337633838383338343036396466 Jun 25 16:34:41.287000 audit: BPF prog-id=180 op=LOAD Jun 25 16:34:41.287000 audit[4608]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4524 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:41.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032323534396535386434616662623337633838383338343036396466 Jun 25 16:34:41.287000 audit: BPF prog-id=180 op=UNLOAD Jun 25 16:34:41.287000 audit: BPF prog-id=179 op=UNLOAD Jun 25 16:34:41.287000 audit: BPF prog-id=181 op=LOAD Jun 25 16:34:41.287000 audit[4608]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4524 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:41.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032323534396535386434616662623337633838383338343036396466 Jun 25 16:34:41.310741 containerd[1344]: time="2024-06-25T16:34:41.310709383Z" level=info msg="StartContainer for \"022549e58d4afbb37c888384069dfa0d4ee792f2e2cc64a72ded2eb105e381a4\" returns successfully" Jun 25 16:34:41.611817 containerd[1344]: time="2024-06-25T16:34:41.611785023Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:41.621146 containerd[1344]: time="2024-06-25T16:34:41.616115963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Jun 25 16:34:41.625958 containerd[1344]: time="2024-06-25T16:34:41.625941580Z" level=info msg="ImageUpdate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:41.636414 containerd[1344]: time="2024-06-25T16:34:41.636392996Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:41.644852 containerd[1344]: time="2024-06-25T16:34:41.644826114Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:34:41.645778 containerd[1344]: time="2024-06-25T16:34:41.645753167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 413.196441ms" Jun 25 16:34:41.645820 containerd[1344]: time="2024-06-25T16:34:41.645778540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:34:41.647029 containerd[1344]: time="2024-06-25T16:34:41.647002194Z" level=info msg="CreateContainer within sandbox \"b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:34:41.697896 containerd[1344]: time="2024-06-25T16:34:41.697868109Z" level=info msg="CreateContainer within sandbox \"b5230fbbb15a084159a73076f2c71162b062b6466d826621b6bed08e1e5b5f5d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"32db18ebfaf297752f797d2fe3fbad0e716a408c7bcaa905eee459217bd90c3d\"" Jun 25 16:34:41.698670 containerd[1344]: time="2024-06-25T16:34:41.698649787Z" level=info msg="StartContainer for \"32db18ebfaf297752f797d2fe3fbad0e716a408c7bcaa905eee459217bd90c3d\"" Jun 25 16:34:41.715000 audit[4658]: NETFILTER_CFG table=filter:119 family=2 entries=10 op=nft_register_rule pid=4658 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:41.715000 audit[4658]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdf48ca940 a2=0 a3=7ffdf48ca92c items=0 ppid=2604 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:41.715000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:41.717834 systemd[1]: Started cri-containerd-32db18ebfaf297752f797d2fe3fbad0e716a408c7bcaa905eee459217bd90c3d.scope - libcontainer container 32db18ebfaf297752f797d2fe3fbad0e716a408c7bcaa905eee459217bd90c3d. 
Jun 25 16:34:41.717000 audit[4658]: NETFILTER_CFG table=nat:120 family=2 entries=20 op=nft_register_rule pid=4658 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:41.717000 audit[4658]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdf48ca940 a2=0 a3=7ffdf48ca92c items=0 ppid=2604 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:41.717000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:41.722000 audit: BPF prog-id=182 op=LOAD Jun 25 16:34:41.723000 audit: BPF prog-id=183 op=LOAD Jun 25 16:34:41.723000 audit[4648]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4526 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:41.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332646231386562666166323937373532663739376432666533666261 Jun 25 16:34:41.723000 audit: BPF prog-id=184 op=LOAD Jun 25 16:34:41.723000 audit[4648]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4526 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:41.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332646231386562666166323937373532663739376432666533666261 Jun 25 16:34:41.723000 audit: BPF prog-id=184 op=UNLOAD Jun 25 16:34:41.723000 audit: BPF prog-id=183 op=UNLOAD Jun 25 16:34:41.723000 audit: BPF prog-id=185 op=LOAD Jun 25 16:34:41.723000 audit[4648]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4526 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:41.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332646231386562666166323937373532663739376432666533666261 Jun 25 16:34:41.744490 containerd[1344]: time="2024-06-25T16:34:41.744464409Z" level=info msg="StartContainer for \"32db18ebfaf297752f797d2fe3fbad0e716a408c7bcaa905eee459217bd90c3d\" returns successfully" Jun 25 16:34:42.712207 kubelet[2425]: I0625 16:34:42.712182 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c58479c75-zzt7l" podStartSLOduration=3.460393883 podCreationTimestamp="2024-06-25 16:34:37 +0000 UTC" firstStartedPulling="2024-06-25 16:34:39.394207113 +0000 UTC m=+51.062635149" lastFinishedPulling="2024-06-25 16:34:41.645971076 +0000 UTC m=+53.314399115" 
observedRunningTime="2024-06-25 16:34:42.711923801 +0000 UTC m=+54.380351854" watchObservedRunningTime="2024-06-25 16:34:42.712157849 +0000 UTC m=+54.380585889" Jun 25 16:34:42.712485 kubelet[2425]: I0625 16:34:42.712328 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c58479c75-5wzx5" podStartSLOduration=3.86738653 podCreationTimestamp="2024-06-25 16:34:37 +0000 UTC" firstStartedPulling="2024-06-25 16:34:39.387221083 +0000 UTC m=+51.055649118" lastFinishedPulling="2024-06-25 16:34:41.232150512 +0000 UTC m=+52.900578545" observedRunningTime="2024-06-25 16:34:41.700043716 +0000 UTC m=+53.368471763" watchObservedRunningTime="2024-06-25 16:34:42.712315957 +0000 UTC m=+54.380743996" Jun 25 16:34:42.745153 kubelet[2425]: I0625 16:34:42.745126 2425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:34:42.858905 kernel: kauditd_printk_skb: 62 callbacks suppressed Jun 25 16:34:42.864850 kernel: audit: type=1325 audit(1719333282.854:613): table=filter:121 family=2 entries=10 op=nft_register_rule pid=4681 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:42.864890 kernel: audit: type=1300 audit(1719333282.854:613): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc70beab20 a2=0 a3=7ffc70beab0c items=0 ppid=2604 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:42.864958 kernel: audit: type=1327 audit(1719333282.854:613): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:42.864977 kernel: audit: type=1325 audit(1719333282.861:614): table=nat:122 family=2 entries=20 op=nft_register_rule pid=4681 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:42.854000 audit[4681]: NETFILTER_CFG table=filter:121 family=2 entries=10 op=nft_register_rule pid=4681 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:42.854000 audit[4681]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc70beab20 a2=0 a3=7ffc70beab0c items=0 ppid=2604 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:42.854000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:42.861000 audit[4681]: NETFILTER_CFG table=nat:122 family=2 entries=20 op=nft_register_rule pid=4681 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:34:42.861000 audit[4681]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc70beab20 a2=0 a3=7ffc70beab0c items=0 ppid=2604 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:42.866106 kernel: audit: type=1300 audit(1719333282.861:614): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc70beab20 a2=0 a3=7ffc70beab0c items=0 ppid=2604 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jun 25 16:34:42.861000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:42.869775 kernel: audit: type=1327 audit(1719333282.861:614): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:34:43.701277 kubelet[2425]: I0625 16:34:43.701249 2425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:34:44.686000 audit[2295]: AVC avc: denied { watch } for pid=2295 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=1041984 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:44.686000 audit[2295]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001d12630 a2=fc6 a3=0 items=0 ppid=2134 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c480,c1023 key=(null) Jun 25 16:34:44.689952 kernel: audit: type=1400 audit(1719333284.686:615): avc: denied { watch } for pid=2295 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=1041984 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:44.689990 kernel: audit: type=1300 audit(1719333284.686:615): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001d12630 a2=fc6 a3=0 items=0 ppid=2134 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c480,c1023 key=(null) Jun 25 16:34:44.686000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:44.694095 kernel: audit: type=1327 audit(1719333284.686:615): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:44.688000 audit[2295]: AVC avc: denied { watch } for pid=2295 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:44.696017 kernel: audit: type=1400 audit(1719333284.688:616): avc: denied { watch } for pid=2295 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:44.688000 audit[2295]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000705a20 a2=fc6 a3=0 items=0 ppid=2134 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c480,c1023 key=(null) Jun 25 16:34:44.688000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:45.564000 audit[2286]: AVC avc: denied { watch } for pid=2286 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=1041980 scontext=system_u:system_r:container_t:s0:c449,c892 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:45.564000 audit[2286]: AVC avc: denied { watch } for pid=2286 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=1041984 scontext=system_u:system_r:container_t:s0:c449,c892 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:45.564000 audit[2286]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6b a1=c0074986c0 a2=fc6 a3=0 items=0 ppid=2124 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c449,c892 key=(null) Jun 25 16:34:45.564000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313035002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:34:45.564000 audit[2286]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c00bdd1d40 a2=fc6 a3=0 items=0 ppid=2124 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c449,c892 key=(null) Jun 25 16:34:45.564000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313035002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:34:45.570000 audit[2286]: AVC avc: denied { watch } for pid=2286 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=1041984 scontext=system_u:system_r:container_t:s0:c449,c892 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:45.570000 audit[2286]: AVC avc: denied { watch } for pid=2286 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=1041986 scontext=system_u:system_r:container_t:s0:c449,c892 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:45.570000 audit[2286]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c0074987e0 a2=fc6 a3=0 items=0 ppid=2124 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c449,c892 key=(null) Jun 25 16:34:45.570000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313035002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:34:45.570000 audit[2286]: AVC avc: denied { watch } for pid=2286 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" 
ino=1041978 scontext=system_u:system_r:container_t:s0:c449,c892 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:45.570000 audit[2286]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c00749afa0 a2=fc6 a3=0 items=0 ppid=2124 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c449,c892 key=(null) Jun 25 16:34:45.570000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313035002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:34:45.570000 audit[2286]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6b a1=c00bdd1e60 a2=fc6 a3=0 items=0 ppid=2124 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c449,c892 key=(null) Jun 25 16:34:45.570000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313035002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:34:45.570000 audit[2286]: AVC avc: denied { watch } for pid=2286 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c449,c892 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:45.570000 audit[2286]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6b a1=c007511ca0 a2=fc6 a3=0 items=0 ppid=2124 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c449,c892 key=(null) Jun 25 16:34:45.570000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313035002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:34:48.513485 containerd[1344]: time="2024-06-25T16:34:48.513438048Z" level=info msg="StopPodSandbox for \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\"" Jun 25 16:34:48.644721 containerd[1344]: 2024-06-25 16:34:48.622 [WARNING][4714] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--z6zw6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8da5006e-233c-4549-81b3-ec063a911736", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b", Pod:"csi-node-driver-z6zw6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali37d3952c27c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:48.644721 containerd[1344]: 2024-06-25 16:34:48.623 [INFO][4714] k8s.go 608: Cleaning up netns ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Jun 25 16:34:48.644721 containerd[1344]: 2024-06-25 16:34:48.623 [INFO][4714] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" iface="eth0" netns="" Jun 25 16:34:48.644721 containerd[1344]: 2024-06-25 16:34:48.623 [INFO][4714] k8s.go 615: Releasing IP address(es) ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Jun 25 16:34:48.644721 containerd[1344]: 2024-06-25 16:34:48.623 [INFO][4714] utils.go 188: Calico CNI releasing IP address ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Jun 25 16:34:48.644721 containerd[1344]: 2024-06-25 16:34:48.638 [INFO][4720] ipam_plugin.go 411: Releasing address using handleID ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" HandleID="k8s-pod-network.7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Workload="localhost-k8s-csi--node--driver--z6zw6-eth0" Jun 25 16:34:48.644721 containerd[1344]: 2024-06-25 16:34:48.638 [INFO][4720] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:48.644721 containerd[1344]: 2024-06-25 16:34:48.638 [INFO][4720] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:48.644721 containerd[1344]: 2024-06-25 16:34:48.641 [WARNING][4720] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" HandleID="k8s-pod-network.7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Workload="localhost-k8s-csi--node--driver--z6zw6-eth0" Jun 25 16:34:48.644721 containerd[1344]: 2024-06-25 16:34:48.641 [INFO][4720] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" HandleID="k8s-pod-network.7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Workload="localhost-k8s-csi--node--driver--z6zw6-eth0" Jun 25 16:34:48.644721 containerd[1344]: 2024-06-25 16:34:48.642 [INFO][4720] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:48.644721 containerd[1344]: 2024-06-25 16:34:48.643 [INFO][4714] k8s.go 621: Teardown processing complete. ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Jun 25 16:34:48.645403 containerd[1344]: time="2024-06-25T16:34:48.644765115Z" level=info msg="TearDown network for sandbox \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\" successfully" Jun 25 16:34:48.645403 containerd[1344]: time="2024-06-25T16:34:48.644785010Z" level=info msg="StopPodSandbox for \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\" returns successfully" Jun 25 16:34:48.680784 containerd[1344]: time="2024-06-25T16:34:48.680756049Z" level=info msg="RemovePodSandbox for \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\"" Jun 25 16:34:48.692789 containerd[1344]: time="2024-06-25T16:34:48.683243961Z" level=info msg="Forcibly stopping sandbox \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\"" Jun 25 16:34:48.739458 containerd[1344]: 2024-06-25 16:34:48.717 [WARNING][4738] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--z6zw6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8da5006e-233c-4549-81b3-ec063a911736", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f99f0be48fc1b2020387ab2a498950e7059da2f52ec0e85b003ebf80b7082a1b", Pod:"csi-node-driver-z6zw6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali37d3952c27c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:48.739458 containerd[1344]: 2024-06-25 16:34:48.717 [INFO][4738] k8s.go 608: Cleaning up netns ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Jun 25 16:34:48.739458 containerd[1344]: 2024-06-25 16:34:48.717 [INFO][4738] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" iface="eth0" netns="" Jun 25 16:34:48.739458 containerd[1344]: 2024-06-25 16:34:48.718 [INFO][4738] k8s.go 615: Releasing IP address(es) ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Jun 25 16:34:48.739458 containerd[1344]: 2024-06-25 16:34:48.718 [INFO][4738] utils.go 188: Calico CNI releasing IP address ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Jun 25 16:34:48.739458 containerd[1344]: 2024-06-25 16:34:48.732 [INFO][4745] ipam_plugin.go 411: Releasing address using handleID ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" HandleID="k8s-pod-network.7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Workload="localhost-k8s-csi--node--driver--z6zw6-eth0" Jun 25 16:34:48.739458 containerd[1344]: 2024-06-25 16:34:48.732 [INFO][4745] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:48.739458 containerd[1344]: 2024-06-25 16:34:48.732 [INFO][4745] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:48.739458 containerd[1344]: 2024-06-25 16:34:48.736 [WARNING][4745] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" HandleID="k8s-pod-network.7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Workload="localhost-k8s-csi--node--driver--z6zw6-eth0" Jun 25 16:34:48.739458 containerd[1344]: 2024-06-25 16:34:48.736 [INFO][4745] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" HandleID="k8s-pod-network.7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Workload="localhost-k8s-csi--node--driver--z6zw6-eth0" Jun 25 16:34:48.739458 containerd[1344]: 2024-06-25 16:34:48.737 [INFO][4745] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:48.739458 containerd[1344]: 2024-06-25 16:34:48.738 [INFO][4738] k8s.go 621: Teardown processing complete. ContainerID="7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6" Jun 25 16:34:48.740251 containerd[1344]: time="2024-06-25T16:34:48.739842640Z" level=info msg="TearDown network for sandbox \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\" successfully" Jun 25 16:34:48.756112 containerd[1344]: time="2024-06-25T16:34:48.756096286Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:34:48.762878 containerd[1344]: time="2024-06-25T16:34:48.762863270Z" level=info msg="RemovePodSandbox \"7e09b030a2e879f3548d90bfbf7367ca4478d6814c42caf351d46faf2f2bfcd6\" returns successfully" Jun 25 16:34:48.763298 containerd[1344]: time="2024-06-25T16:34:48.763286460Z" level=info msg="StopPodSandbox for \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\"" Jun 25 16:34:48.805274 containerd[1344]: 2024-06-25 16:34:48.785 [WARNING][4765] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--glphj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d80999f6-e8ab-41e9-805a-94aad5dbb65d", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5", Pod:"coredns-5dd5756b68-glphj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43f37aceffd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:48.805274 containerd[1344]: 2024-06-25 16:34:48.785 [INFO][4765] k8s.go 608: Cleaning up netns ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Jun 25 16:34:48.805274 containerd[1344]: 2024-06-25 16:34:48.785 [INFO][4765] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" iface="eth0" netns="" Jun 25 16:34:48.805274 containerd[1344]: 2024-06-25 16:34:48.785 [INFO][4765] k8s.go 615: Releasing IP address(es) ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Jun 25 16:34:48.805274 containerd[1344]: 2024-06-25 16:34:48.785 [INFO][4765] utils.go 188: Calico CNI releasing IP address ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Jun 25 16:34:48.805274 containerd[1344]: 2024-06-25 16:34:48.798 [INFO][4771] ipam_plugin.go 411: Releasing address using handleID ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" HandleID="k8s-pod-network.976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Workload="localhost-k8s-coredns--5dd5756b68--glphj-eth0" Jun 25 16:34:48.805274 containerd[1344]: 2024-06-25 16:34:48.798 [INFO][4771] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:48.805274 containerd[1344]: 2024-06-25 16:34:48.798 [INFO][4771] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:48.805274 containerd[1344]: 2024-06-25 16:34:48.801 [WARNING][4771] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" HandleID="k8s-pod-network.976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Workload="localhost-k8s-coredns--5dd5756b68--glphj-eth0" Jun 25 16:34:48.805274 containerd[1344]: 2024-06-25 16:34:48.801 [INFO][4771] ipam_plugin.go 439: Releasing address using workloadID ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" HandleID="k8s-pod-network.976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Workload="localhost-k8s-coredns--5dd5756b68--glphj-eth0" Jun 25 16:34:48.805274 containerd[1344]: 2024-06-25 16:34:48.802 [INFO][4771] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:48.805274 containerd[1344]: 2024-06-25 16:34:48.803 [INFO][4765] k8s.go 621: Teardown processing complete. ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Jun 25 16:34:48.805610 containerd[1344]: time="2024-06-25T16:34:48.805302882Z" level=info msg="TearDown network for sandbox \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\" successfully" Jun 25 16:34:48.805610 containerd[1344]: time="2024-06-25T16:34:48.805324364Z" level=info msg="StopPodSandbox for \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\" returns successfully" Jun 25 16:34:48.805709 containerd[1344]: time="2024-06-25T16:34:48.805692217Z" level=info msg="RemovePodSandbox for \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\"" Jun 25 16:34:48.805745 containerd[1344]: time="2024-06-25T16:34:48.805717272Z" level=info msg="Forcibly stopping sandbox \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\"" Jun 25 16:34:48.851000 containerd[1344]: 2024-06-25 16:34:48.828 [WARNING][4789] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--glphj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d80999f6-e8ab-41e9-805a-94aad5dbb65d", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"040ce8df6befa17c98946e9f535063558ceff865819123151591e8a8d07135c5", Pod:"coredns-5dd5756b68-glphj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43f37aceffd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:48.851000 containerd[1344]: 2024-06-25 16:34:48.828 [INFO][4789] k8s.go 608: Cleaning up netns ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Jun 25 16:34:48.851000 containerd[1344]: 2024-06-25 16:34:48.828 [INFO][4789] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" iface="eth0" netns="" Jun 25 16:34:48.851000 containerd[1344]: 2024-06-25 16:34:48.828 [INFO][4789] k8s.go 615: Releasing IP address(es) ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Jun 25 16:34:48.851000 containerd[1344]: 2024-06-25 16:34:48.828 [INFO][4789] utils.go 188: Calico CNI releasing IP address ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Jun 25 16:34:48.851000 containerd[1344]: 2024-06-25 16:34:48.844 [INFO][4796] ipam_plugin.go 411: Releasing address using handleID ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" HandleID="k8s-pod-network.976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Workload="localhost-k8s-coredns--5dd5756b68--glphj-eth0" Jun 25 16:34:48.851000 containerd[1344]: 2024-06-25 16:34:48.844 [INFO][4796] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:48.851000 containerd[1344]: 2024-06-25 16:34:48.844 [INFO][4796] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:48.851000 containerd[1344]: 2024-06-25 16:34:48.848 [WARNING][4796] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" HandleID="k8s-pod-network.976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Workload="localhost-k8s-coredns--5dd5756b68--glphj-eth0" Jun 25 16:34:48.851000 containerd[1344]: 2024-06-25 16:34:48.848 [INFO][4796] ipam_plugin.go 439: Releasing address using workloadID ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" HandleID="k8s-pod-network.976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Workload="localhost-k8s-coredns--5dd5756b68--glphj-eth0" Jun 25 16:34:48.851000 containerd[1344]: 2024-06-25 16:34:48.849 [INFO][4796] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:48.851000 containerd[1344]: 2024-06-25 16:34:48.850 [INFO][4789] k8s.go 621: Teardown processing complete. ContainerID="976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a" Jun 25 16:34:48.851586 containerd[1344]: time="2024-06-25T16:34:48.851035441Z" level=info msg="TearDown network for sandbox \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\" successfully" Jun 25 16:34:48.861600 containerd[1344]: time="2024-06-25T16:34:48.861577501Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:34:48.861719 containerd[1344]: time="2024-06-25T16:34:48.861706333Z" level=info msg="RemovePodSandbox \"976f0a4ee6bba16ad598f215fe2738d934089b78bc477dabaf9df43b33dca34a\" returns successfully" Jun 25 16:34:48.862175 containerd[1344]: time="2024-06-25T16:34:48.862155685Z" level=info msg="StopPodSandbox for \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\"" Jun 25 16:34:48.904468 containerd[1344]: 2024-06-25 16:34:48.884 [WARNING][4815] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0", GenerateName:"calico-kube-controllers-678cc79c9f-", Namespace:"calico-system", SelfLink:"", UID:"12e884cf-b0a0-408e-9cda-1efd4981caa7", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"678cc79c9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc", Pod:"calico-kube-controllers-678cc79c9f-99468", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e77d959bf6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:48.904468 containerd[1344]: 2024-06-25 16:34:48.884 [INFO][4815] k8s.go 608: Cleaning up netns ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Jun 25 16:34:48.904468 containerd[1344]: 2024-06-25 16:34:48.884 [INFO][4815] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" iface="eth0" netns="" Jun 25 16:34:48.904468 containerd[1344]: 2024-06-25 16:34:48.884 [INFO][4815] k8s.go 615: Releasing IP address(es) ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Jun 25 16:34:48.904468 containerd[1344]: 2024-06-25 16:34:48.884 [INFO][4815] utils.go 188: Calico CNI releasing IP address ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Jun 25 16:34:48.904468 containerd[1344]: 2024-06-25 16:34:48.898 [INFO][4821] ipam_plugin.go 411: Releasing address using handleID ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" HandleID="k8s-pod-network.cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Workload="localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0" Jun 25 16:34:48.904468 containerd[1344]: 2024-06-25 16:34:48.898 [INFO][4821] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:48.904468 containerd[1344]: 2024-06-25 16:34:48.898 [INFO][4821] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:48.904468 containerd[1344]: 2024-06-25 16:34:48.901 [WARNING][4821] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" HandleID="k8s-pod-network.cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Workload="localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0" Jun 25 16:34:48.904468 containerd[1344]: 2024-06-25 16:34:48.901 [INFO][4821] ipam_plugin.go 439: Releasing address using workloadID ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" HandleID="k8s-pod-network.cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Workload="localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0" Jun 25 16:34:48.904468 containerd[1344]: 2024-06-25 16:34:48.902 [INFO][4821] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:48.904468 containerd[1344]: 2024-06-25 16:34:48.903 [INFO][4815] k8s.go 621: Teardown processing complete. ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Jun 25 16:34:48.905602 containerd[1344]: time="2024-06-25T16:34:48.904489496Z" level=info msg="TearDown network for sandbox \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\" successfully" Jun 25 16:34:48.905602 containerd[1344]: time="2024-06-25T16:34:48.904554570Z" level=info msg="StopPodSandbox for \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\" returns successfully" Jun 25 16:34:48.905602 containerd[1344]: time="2024-06-25T16:34:48.904985386Z" level=info msg="RemovePodSandbox for \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\"" Jun 25 16:34:48.905602 containerd[1344]: time="2024-06-25T16:34:48.905003931Z" level=info msg="Forcibly stopping sandbox \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\"" Jun 25 16:34:48.953600 containerd[1344]: 2024-06-25 16:34:48.933 [WARNING][4840] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0", GenerateName:"calico-kube-controllers-678cc79c9f-", Namespace:"calico-system", SelfLink:"", UID:"12e884cf-b0a0-408e-9cda-1efd4981caa7", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"678cc79c9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ccb0238c76f5b3e9dc06b98943b2973f09a5ad7324d502da637faaf85e1d0edc", Pod:"calico-kube-controllers-678cc79c9f-99468", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e77d959bf6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:48.953600 containerd[1344]: 2024-06-25 16:34:48.933 [INFO][4840] k8s.go 608: Cleaning up netns ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Jun 25 16:34:48.953600 containerd[1344]: 2024-06-25 16:34:48.933 [INFO][4840] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" iface="eth0" netns="" Jun 25 16:34:48.953600 containerd[1344]: 2024-06-25 16:34:48.933 [INFO][4840] k8s.go 615: Releasing IP address(es) ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Jun 25 16:34:48.953600 containerd[1344]: 2024-06-25 16:34:48.933 [INFO][4840] utils.go 188: Calico CNI releasing IP address ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Jun 25 16:34:48.953600 containerd[1344]: 2024-06-25 16:34:48.946 [INFO][4848] ipam_plugin.go 411: Releasing address using handleID ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" HandleID="k8s-pod-network.cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Workload="localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0" Jun 25 16:34:48.953600 containerd[1344]: 2024-06-25 16:34:48.946 [INFO][4848] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:48.953600 containerd[1344]: 2024-06-25 16:34:48.946 [INFO][4848] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:48.953600 containerd[1344]: 2024-06-25 16:34:48.949 [WARNING][4848] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" HandleID="k8s-pod-network.cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Workload="localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0" Jun 25 16:34:48.953600 containerd[1344]: 2024-06-25 16:34:48.950 [INFO][4848] ipam_plugin.go 439: Releasing address using workloadID ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" HandleID="k8s-pod-network.cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Workload="localhost-k8s-calico--kube--controllers--678cc79c9f--99468-eth0" Jun 25 16:34:48.953600 containerd[1344]: 2024-06-25 16:34:48.951 [INFO][4848] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:48.953600 containerd[1344]: 2024-06-25 16:34:48.952 [INFO][4840] k8s.go 621: Teardown processing complete. ContainerID="cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd" Jun 25 16:34:48.953937 containerd[1344]: time="2024-06-25T16:34:48.953621812Z" level=info msg="TearDown network for sandbox \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\" successfully" Jun 25 16:34:48.954954 containerd[1344]: time="2024-06-25T16:34:48.954937608Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:34:48.954996 containerd[1344]: time="2024-06-25T16:34:48.954972439Z" level=info msg="RemovePodSandbox \"cf85156ec752b8888a38e084998ecaef7399a0c3ea5c3af59fd511db7b4141cd\" returns successfully" Jun 25 16:34:48.955330 containerd[1344]: time="2024-06-25T16:34:48.955267005Z" level=info msg="StopPodSandbox for \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\"" Jun 25 16:34:48.995374 containerd[1344]: 2024-06-25 16:34:48.976 [WARNING][4866] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--g4hfm-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b19c0974-d630-40fb-96d8-9884cf02a803", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483", Pod:"coredns-5dd5756b68-g4hfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02578a52c66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:48.995374 containerd[1344]: 2024-06-25 16:34:48.976 [INFO][4866] k8s.go 608: Cleaning up netns ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Jun 25 16:34:48.995374 containerd[1344]: 2024-06-25 16:34:48.976 [INFO][4866] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" iface="eth0" netns="" Jun 25 16:34:48.995374 containerd[1344]: 2024-06-25 16:34:48.976 [INFO][4866] k8s.go 615: Releasing IP address(es) ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Jun 25 16:34:48.995374 containerd[1344]: 2024-06-25 16:34:48.976 [INFO][4866] utils.go 188: Calico CNI releasing IP address ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Jun 25 16:34:48.995374 containerd[1344]: 2024-06-25 16:34:48.989 [INFO][4872] ipam_plugin.go 411: Releasing address using handleID ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" HandleID="k8s-pod-network.6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Workload="localhost-k8s-coredns--5dd5756b68--g4hfm-eth0" Jun 25 16:34:48.995374 containerd[1344]: 2024-06-25 16:34:48.989 [INFO][4872] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:48.995374 containerd[1344]: 2024-06-25 16:34:48.989 [INFO][4872] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:48.995374 containerd[1344]: 2024-06-25 16:34:48.992 [WARNING][4872] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" HandleID="k8s-pod-network.6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Workload="localhost-k8s-coredns--5dd5756b68--g4hfm-eth0" Jun 25 16:34:48.995374 containerd[1344]: 2024-06-25 16:34:48.992 [INFO][4872] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" HandleID="k8s-pod-network.6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Workload="localhost-k8s-coredns--5dd5756b68--g4hfm-eth0" Jun 25 16:34:48.995374 containerd[1344]: 2024-06-25 16:34:48.993 [INFO][4872] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:48.995374 containerd[1344]: 2024-06-25 16:34:48.994 [INFO][4866] k8s.go 621: Teardown processing complete. ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Jun 25 16:34:48.995749 containerd[1344]: time="2024-06-25T16:34:48.995714795Z" level=info msg="TearDown network for sandbox \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\" successfully" Jun 25 16:34:48.995799 containerd[1344]: time="2024-06-25T16:34:48.995787695Z" level=info msg="StopPodSandbox for \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\" returns successfully" Jun 25 16:34:48.996127 containerd[1344]: time="2024-06-25T16:34:48.996114834Z" level=info msg="RemovePodSandbox for \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\"" Jun 25 16:34:48.996186 containerd[1344]: time="2024-06-25T16:34:48.996165494Z" level=info msg="Forcibly stopping sandbox \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\"" Jun 25 16:34:49.040147 containerd[1344]: 2024-06-25 16:34:49.020 [WARNING][4890] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--g4hfm-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b19c0974-d630-40fb-96d8-9884cf02a803", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 34, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"26edf33bfc9a445da5ad81a81e640763d188d2a50d5e78305ae3134a03449483", Pod:"coredns-5dd5756b68-g4hfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02578a52c66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:34:49.040147 containerd[1344]: 2024-06-25 16:34:49.021 [INFO][4890] k8s.go 608: Cleaning up netns ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Jun 25 16:34:49.040147 containerd[1344]: 2024-06-25 16:34:49.021 [INFO][4890] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" iface="eth0" netns="" Jun 25 16:34:49.040147 containerd[1344]: 2024-06-25 16:34:49.021 [INFO][4890] k8s.go 615: Releasing IP address(es) ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Jun 25 16:34:49.040147 containerd[1344]: 2024-06-25 16:34:49.021 [INFO][4890] utils.go 188: Calico CNI releasing IP address ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Jun 25 16:34:49.040147 containerd[1344]: 2024-06-25 16:34:49.034 [INFO][4896] ipam_plugin.go 411: Releasing address using handleID ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" HandleID="k8s-pod-network.6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Workload="localhost-k8s-coredns--5dd5756b68--g4hfm-eth0" Jun 25 16:34:49.040147 containerd[1344]: 2024-06-25 16:34:49.034 [INFO][4896] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:34:49.040147 containerd[1344]: 2024-06-25 16:34:49.034 [INFO][4896] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:34:49.040147 containerd[1344]: 2024-06-25 16:34:49.037 [WARNING][4896] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" HandleID="k8s-pod-network.6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Workload="localhost-k8s-coredns--5dd5756b68--g4hfm-eth0" Jun 25 16:34:49.040147 containerd[1344]: 2024-06-25 16:34:49.037 [INFO][4896] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" HandleID="k8s-pod-network.6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Workload="localhost-k8s-coredns--5dd5756b68--g4hfm-eth0" Jun 25 16:34:49.040147 containerd[1344]: 2024-06-25 16:34:49.038 [INFO][4896] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:34:49.040147 containerd[1344]: 2024-06-25 16:34:49.039 [INFO][4890] k8s.go 621: Teardown processing complete. ContainerID="6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52" Jun 25 16:34:49.040591 containerd[1344]: time="2024-06-25T16:34:49.040168260Z" level=info msg="TearDown network for sandbox \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\" successfully" Jun 25 16:34:49.041381 containerd[1344]: time="2024-06-25T16:34:49.041366420Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:34:49.041416 containerd[1344]: time="2024-06-25T16:34:49.041398850Z" level=info msg="RemovePodSandbox \"6254e38ca6ac4b6292c19a9e9a1d79771bdd602a4638e78122707266225e6a52\" returns successfully" Jun 25 16:34:53.712871 systemd[1]: Started sshd@7-139.178.70.105:22-139.178.68.195:55272.service - OpenSSH per-connection server daemon (139.178.68.195:55272). Jun 25 16:34:53.716033 kernel: kauditd_printk_skb: 20 callbacks suppressed Jun 25 16:34:53.716076 kernel: audit: type=1130 audit(1719333293.712:623): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.105:22-139.178.68.195:55272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:53.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.105:22-139.178.68.195:55272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:34:53.783905 sshd[4909]: Accepted publickey for core from 139.178.68.195 port 55272 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:34:53.789545 kernel: audit: type=1101 audit(1719333293.783:624): pid=4909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:53.789581 kernel: audit: type=1103 audit(1719333293.786:625): pid=4909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:53.789602 kernel: audit: type=1006 audit(1719333293.786:626): pid=4909 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jun 25 16:34:53.783000 audit[4909]: USER_ACCT pid=4909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:53.786000 audit[4909]: CRED_ACQ pid=4909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:53.786000 audit[4909]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe19f6caa0 a2=3 a3=7f3bde949480 items=0 ppid=1 pid=4909 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:53.793815 kernel: audit: type=1300 audit(1719333293.786:626): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe19f6caa0 a2=3 a3=7f3bde949480 items=0 ppid=1 pid=4909 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:53.793846 kernel: audit: type=1327 audit(1719333293.786:626): proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:53.786000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:53.794930 sshd[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:34:53.804860 systemd-logind[1326]: New session 10 of user core. Jun 25 16:34:53.808815 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 25 16:34:53.811000 audit[4909]: USER_START pid=4909 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:53.813000 audit[4911]: CRED_ACQ pid=4911 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:53.816233 kernel: audit: type=1105 audit(1719333293.811:627): pid=4909 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:53.816265 kernel: audit: type=1103 audit(1719333293.813:628): pid=4911 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:54.212570 sshd[4909]: pam_unix(sshd:session): session closed for user core Jun 25 16:34:54.212000 audit[4909]: USER_END pid=4909 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:54.215847 kernel: audit: type=1106 audit(1719333294.212:629): pid=4909 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:54.215882 kernel: audit: type=1104 audit(1719333294.212:630): pid=4909 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:54.212000 audit[4909]: CRED_DISP pid=4909 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:54.219204 systemd[1]: sshd@7-139.178.70.105:22-139.178.68.195:55272.service: Deactivated successfully. Jun 25 16:34:54.219394 systemd-logind[1326]: Session 10 logged out. Waiting for processes to exit. Jun 25 16:34:54.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.105:22-139.178.68.195:55272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:54.219723 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 16:34:54.220318 systemd-logind[1326]: Removed session 10. Jun 25 16:34:58.167646 systemd[1]: run-containerd-runc-k8s.io-f47c1b9aee63ea9831b449233627be9b80bd5b212f2102522d75dc565747eabb-runc.4ewzHn.mount: Deactivated successfully. 
Jun 25 16:34:58.400674 systemd[1]: run-containerd-runc-k8s.io-b6f0aefe61be1ca5b0ac2a5056a687fd5aaaf4e3201729e492e91e98ea2a0e03-runc.CMqcBz.mount: Deactivated successfully. Jun 25 16:34:59.221955 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:34:59.222033 kernel: audit: type=1130 audit(1719333299.220:632): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.105:22-139.178.68.195:57568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:59.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.105:22-139.178.68.195:57568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:34:59.221514 systemd[1]: Started sshd@8-139.178.70.105:22-139.178.68.195:57568.service - OpenSSH per-connection server daemon (139.178.68.195:57568). Jun 25 16:34:59.317000 audit[4988]: USER_ACCT pid=4988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:59.318745 sshd[4988]: Accepted publickey for core from 139.178.68.195 port 57568 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:34:59.320749 kernel: audit: type=1101 audit(1719333299.317:633): pid=4988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:59.320000 audit[4988]: CRED_ACQ pid=4988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:59.321695 sshd[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:34:59.322860 kernel: audit: type=1103 audit(1719333299.320:634): pid=4988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:59.322891 kernel: audit: type=1006 audit(1719333299.320:635): pid=4988 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jun 25 16:34:59.320000 audit[4988]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc76fde860 a2=3 a3=7f9e3aa7d480 items=0 ppid=1 pid=4988 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:59.326094 kernel: audit: type=1300 audit(1719333299.320:635): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc76fde860 a2=3 a3=7f9e3aa7d480 items=0 ppid=1 pid=4988 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:59.326124 kernel: audit: type=1327 audit(1719333299.320:635): proctitle=737368643A20636F7265205B707269765D Jun 25 16:34:59.320000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 
16:34:59.326529 systemd-logind[1326]: New session 11 of user core. Jun 25 16:34:59.328896 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 16:34:59.331000 audit[4988]: USER_START pid=4988 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:59.336629 kernel: audit: type=1105 audit(1719333299.331:636): pid=4988 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:59.336665 kernel: audit: type=1103 audit(1719333299.332:637): pid=4992 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:59.332000 audit[4992]: CRED_ACQ pid=4992 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:59.455180 sshd[4988]: pam_unix(sshd:session): session closed for user core Jun 25 16:34:59.455000 audit[4988]: USER_END pid=4988 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:59.457030 systemd-logind[1326]: Session 11 logged out. Waiting for processes to exit. Jun 25 16:34:59.455000 audit[4988]: CRED_DISP pid=4988 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:59.457822 systemd[1]: sshd@8-139.178.70.105:22-139.178.68.195:57568.service: Deactivated successfully. Jun 25 16:34:59.458298 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 16:34:59.459185 systemd-logind[1326]: Removed session 11. Jun 25 16:34:59.459975 kernel: audit: type=1106 audit(1719333299.455:638): pid=4988 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:59.460043 kernel: audit: type=1104 audit(1719333299.455:639): pid=4988 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:34:59.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.105:22-139.178.68.195:57568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:35:00.729000 audit[2295]: AVC avc: denied { watch } for pid=2295 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:35:00.729000 audit[2295]: AVC avc: denied { watch } for pid=2295 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:35:00.729000 audit[2295]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c00214b680 a2=fc6 a3=0 items=0 ppid=2134 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c480,c1023 key=(null) Jun 25 16:35:00.729000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:35:00.729000 audit[2295]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0025beac0 a2=fc6 a3=0 items=0 ppid=2134 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c480,c1023 key=(null) Jun 25 16:35:00.729000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:35:00.732000 audit[2295]: AVC avc: denied { watch } for pid=2295 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:35:00.732000 audit[2295]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00214b6a0 a2=fc6 a3=0 items=0 ppid=2134 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c480,c1023 key=(null) Jun 25 16:35:00.732000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:35:00.747000 audit[2295]: AVC avc: denied { watch } for pid=2295 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:35:00.747000 audit[2295]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0025bec60 a2=fc6 a3=0 items=0 ppid=2134 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c480,c1023 key=(null) Jun 25 16:35:00.747000 audit: 
PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:35:04.469745 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 16:35:04.469806 kernel: audit: type=1130 audit(1719333304.467:645): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.105:22-139.178.68.195:57578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:04.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.105:22-139.178.68.195:57578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:04.467716 systemd[1]: Started sshd@9-139.178.70.105:22-139.178.68.195:57578.service - OpenSSH per-connection server daemon (139.178.68.195:57578). Jun 25 16:35:04.518000 audit[5007]: USER_ACCT pid=5007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:04.519856 sshd[5007]: Accepted publickey for core from 139.178.68.195 port 57578 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:35:04.521115 sshd[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:35:04.519000 audit[5007]: CRED_ACQ pid=5007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:04.522875 kernel: audit: type=1101 audit(1719333304.518:646): pid=5007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:04.522911 kernel: audit: type=1103 audit(1719333304.519:647): pid=5007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:04.526765 kernel: audit: type=1006 audit(1719333304.519:648): pid=5007 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jun 25 16:35:04.519000 audit[5007]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd3f73670 a2=3 a3=7f8c4fa32480 items=0 ppid=1 pid=5007 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:04.529826 kernel: audit: type=1300 audit(1719333304.519:648): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd3f73670 a2=3 a3=7f8c4fa32480 items=0 ppid=1 pid=5007 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:04.529869 kernel: audit: type=1327 audit(1719333304.519:648): 
proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:04.519000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:04.530388 systemd-logind[1326]: New session 12 of user core. Jun 25 16:35:04.532917 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 16:35:04.535000 audit[5007]: USER_START pid=5007 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:04.536000 audit[5009]: CRED_ACQ pid=5009 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:04.540899 kernel: audit: type=1105 audit(1719333304.535:649): pid=5007 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:04.540928 kernel: audit: type=1103 audit(1719333304.536:650): pid=5009 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:04.649334 sshd[5007]: pam_unix(sshd:session): session closed for user core Jun 25 16:35:04.650000 audit[5007]: USER_END pid=5007 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:04.653741 kernel: audit: type=1106 audit(1719333304.650:651): pid=5007 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:04.656502 kernel: audit: type=1104 audit(1719333304.651:652): pid=5007 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:04.651000 audit[5007]: CRED_DISP pid=5007 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:04.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.105:22-139.178.68.195:57578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:04.655205 systemd[1]: sshd@9-139.178.70.105:22-139.178.68.195:57578.service: Deactivated successfully. Jun 25 16:35:04.655574 systemd[1]: session-12.scope: Deactivated successfully. 
Jun 25 16:35:04.657017 systemd[1]: Started sshd@10-139.178.70.105:22-139.178.68.195:57580.service - OpenSSH per-connection server daemon (139.178.68.195:57580). Jun 25 16:35:04.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.105:22-139.178.68.195:57580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:04.657718 systemd-logind[1326]: Session 12 logged out. Waiting for processes to exit. Jun 25 16:35:04.658445 systemd-logind[1326]: Removed session 12. Jun 25 16:35:04.695000 audit[5019]: USER_ACCT pid=5019 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:04.696008 sshd[5019]: Accepted publickey for core from 139.178.68.195 port 57580 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:35:04.696000 audit[5019]: CRED_ACQ pid=5019 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:04.696000 audit[5019]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc348cff30 a2=3 a3=7f3a7ef97480 items=0 ppid=1 pid=5019 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:04.696000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:04.698053 sshd[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:35:04.700796 systemd-logind[1326]: New session 13 of user core. Jun 25 16:35:04.706838 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jun 25 16:35:04.710000 audit[5019]: USER_START pid=5019 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:04.711000 audit[5021]: CRED_ACQ pid=5021 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:04.966420 sshd[5019]: pam_unix(sshd:session): session closed for user core Jun 25 16:35:04.968000 audit[5019]: USER_END pid=5019 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:04.968000 audit[5019]: CRED_DISP pid=5019 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:04.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.105:22-139.178.68.195:57594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:04.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.105:22-139.178.68.195:57580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:04.971174 systemd[1]: Started sshd@11-139.178.70.105:22-139.178.68.195:57594.service - OpenSSH per-connection server daemon (139.178.68.195:57594). Jun 25 16:35:04.971502 systemd[1]: sshd@10-139.178.70.105:22-139.178.68.195:57580.service: Deactivated successfully. Jun 25 16:35:04.974129 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 16:35:04.975284 systemd-logind[1326]: Session 13 logged out. Waiting for processes to exit. Jun 25 16:35:04.976913 systemd-logind[1326]: Removed session 13. 
Jun 25 16:35:05.005166 sshd[5028]: Accepted publickey for core from 139.178.68.195 port 57594 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:35:05.004000 audit[5028]: USER_ACCT pid=5028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:05.005000 audit[5028]: CRED_ACQ pid=5028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:05.005000 audit[5028]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc30d125c0 a2=3 a3=7f12b4062480 items=0 ppid=1 pid=5028 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:05.005000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:05.006736 sshd[5028]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:35:05.010065 systemd-logind[1326]: New session 14 of user core. Jun 25 16:35:05.014823 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 16:35:05.016000 audit[5028]: USER_START pid=5028 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:05.017000 audit[5031]: CRED_ACQ pid=5031 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:05.117593 sshd[5028]: pam_unix(sshd:session): session closed for user core Jun 25 16:35:05.117000 audit[5028]: USER_END pid=5028 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:05.117000 audit[5028]: CRED_DISP pid=5028 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:05.119374 systemd[1]: sshd@11-139.178.70.105:22-139.178.68.195:57594.service: Deactivated successfully. Jun 25 16:35:05.119921 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 16:35:05.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.105:22-139.178.68.195:57594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:05.120320 systemd-logind[1326]: Session 14 logged out. Waiting for processes to exit. Jun 25 16:35:05.120903 systemd-logind[1326]: Removed session 14. Jun 25 16:35:10.126791 systemd[1]: Started sshd@12-139.178.70.105:22-139.178.68.195:43088.service - OpenSSH per-connection server daemon (139.178.68.195:43088). 
Jun 25 16:35:10.127886 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 16:35:10.127922 kernel: audit: type=1130 audit(1719333310.126:672): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.105:22-139.178.68.195:43088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:10.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.105:22-139.178.68.195:43088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:10.151000 audit[5059]: USER_ACCT pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:10.152779 sshd[5059]: Accepted publickey for core from 139.178.68.195 port 43088 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:35:10.154000 audit[5059]: CRED_ACQ pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:10.155826 kernel: audit: type=1101 audit(1719333310.151:673): pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:10.156026 kernel: audit: type=1103 audit(1719333310.154:674): pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:10.156152 sshd[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:35:10.160720 kernel: audit: type=1006 audit(1719333310.154:675): pid=5059 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jun 25 16:35:10.160784 kernel: audit: type=1300 audit(1719333310.154:675): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd6cef0730 a2=3 a3=7faefb96f480 items=0 ppid=1 pid=5059 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:10.154000 audit[5059]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd6cef0730 a2=3 a3=7faefb96f480 items=0 ppid=1 pid=5059 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:10.154000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:10.163447 kernel: audit: type=1327 audit(1719333310.154:675): proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:10.165554 systemd-logind[1326]: New session 15 of user core. Jun 25 16:35:10.171836 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 25 16:35:10.175000 audit[5059]: USER_START pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:10.176000 audit[5061]: CRED_ACQ pid=5061 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:10.179029 kernel: audit: type=1105 audit(1719333310.175:676): pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:10.179058 kernel: audit: type=1103 audit(1719333310.176:677): pid=5061 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:10.258545 sshd[5059]: pam_unix(sshd:session): session closed for user core Jun 25 16:35:10.258000 audit[5059]: USER_END pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:10.258000 audit[5059]: CRED_DISP pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:10.261765 systemd[1]: sshd@12-139.178.70.105:22-139.178.68.195:43088.service: Deactivated successfully. Jun 25 16:35:10.262236 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 16:35:10.263197 kernel: audit: type=1106 audit(1719333310.258:678): pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:10.263230 kernel: audit: type=1104 audit(1719333310.258:679): pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:10.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.105:22-139.178.68.195:43088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:10.263587 systemd-logind[1326]: Session 15 logged out. Waiting for processes to exit. Jun 25 16:35:10.264081 systemd-logind[1326]: Removed session 15. 
Jun 25 16:35:14.902791 kubelet[2425]: I0625 16:35:14.902763 2425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:35:14.935000 audit[5072]: NETFILTER_CFG table=filter:123 family=2 entries=9 op=nft_register_rule pid=5072 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:35:14.935000 audit[5072]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc7bb45620 a2=0 a3=7ffc7bb4560c items=0 ppid=2604 pid=5072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:14.935000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:35:14.936000 audit[5072]: NETFILTER_CFG table=nat:124 family=2 entries=27 op=nft_register_chain pid=5072 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:35:14.936000 audit[5072]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffc7bb45620 a2=0 a3=7ffc7bb4560c items=0 ppid=2604 pid=5072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:14.936000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:35:15.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.105:22-139.178.68.195:43094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:15.266037 systemd[1]: Started sshd@13-139.178.70.105:22-139.178.68.195:43094.service - OpenSSH per-connection server daemon (139.178.68.195:43094). Jun 25 16:35:15.266808 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 16:35:15.266850 kernel: audit: type=1130 audit(1719333315.265:683): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.105:22-139.178.68.195:43094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:35:15.291000 audit[5074]: USER_ACCT pid=5074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:15.292621 sshd[5074]: Accepted publickey for core from 139.178.68.195 port 43094 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:35:15.294000 audit[5074]: CRED_ACQ pid=5074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:15.295696 sshd[5074]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:35:15.297845 kernel: audit: type=1101 audit(1719333315.291:684): pid=5074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:15.297890 kernel: audit: type=1103 audit(1719333315.294:685): pid=5074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:15.297912 kernel: audit: type=1006 audit(1719333315.294:686): pid=5074 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jun 25 16:35:15.299703 kernel: audit: type=1300 audit(1719333315.294:686): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd441145c0 a2=3 a3=7fb49bcc3480 items=0 ppid=1 pid=5074 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:15.294000 audit[5074]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd441145c0 a2=3 a3=7fb49bcc3480 items=0 ppid=1 pid=5074 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:15.301463 systemd-logind[1326]: New session 16 of user core. Jun 25 16:35:15.304068 kernel: audit: type=1327 audit(1719333315.294:686): proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:15.294000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:15.303849 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jun 25 16:35:15.307000 audit[5074]: USER_START pid=5074 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:15.308000 audit[5076]: CRED_ACQ pid=5076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:15.310921 kernel: audit: type=1105 audit(1719333315.307:687): pid=5074 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:15.310954 kernel: audit: type=1103 audit(1719333315.308:688): pid=5076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:15.412538 sshd[5074]: pam_unix(sshd:session): session closed for user core Jun 25 16:35:15.412000 audit[5074]: USER_END pid=5074 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:15.412000 audit[5074]: CRED_DISP pid=5074 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:15.417237 kernel: audit: type=1106 audit(1719333315.412:689): pid=5074 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:15.417274 kernel: audit: type=1104 audit(1719333315.412:690): pid=5074 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:15.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.105:22-139.178.68.195:43094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:15.417575 systemd[1]: sshd@13-139.178.70.105:22-139.178.68.195:43094.service: Deactivated successfully. Jun 25 16:35:15.418064 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 16:35:15.418486 systemd-logind[1326]: Session 16 logged out. Waiting for processes to exit. Jun 25 16:35:15.419027 systemd-logind[1326]: Removed session 16. Jun 25 16:35:20.419757 systemd[1]: Started sshd@14-139.178.70.105:22-139.178.68.195:49004.service - OpenSSH per-connection server daemon (139.178.68.195:49004). 
Jun 25 16:35:20.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.105:22-139.178.68.195:49004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:20.421187 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:35:20.421236 kernel: audit: type=1130 audit(1719333320.419:692): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.105:22-139.178.68.195:49004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:20.463000 audit[5095]: USER_ACCT pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:20.465105 sshd[5095]: Accepted publickey for core from 139.178.68.195 port 49004 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:35:20.463000 audit[5095]: CRED_ACQ pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:20.467260 sshd[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:35:20.469540 kernel: audit: type=1101 audit(1719333320.463:693): pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:20.469583 kernel: audit: type=1103 audit(1719333320.463:694): pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:20.469605 kernel: audit: type=1006 audit(1719333320.463:695): pid=5095 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jun 25 16:35:20.471066 kernel: audit: type=1300 audit(1719333320.463:695): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe411615f0 a2=3 a3=7ff2958ee480 items=0 ppid=1 pid=5095 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:20.463000 audit[5095]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe411615f0 a2=3 a3=7ff2958ee480 items=0 ppid=1 pid=5095 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:20.463000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:20.474809 kernel: audit: type=1327 audit(1719333320.463:695): proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:20.477019 systemd-logind[1326]: New session 17 of user core. Jun 25 16:35:20.479868 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jun 25 16:35:20.483000 audit[5095]: USER_START pid=5095 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:20.483000 audit[5097]: CRED_ACQ pid=5097 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:20.486824 kernel: audit: type=1105 audit(1719333320.483:696): pid=5095 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:20.486853 kernel: audit: type=1103 audit(1719333320.483:697): pid=5097 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:20.648848 sshd[5095]: pam_unix(sshd:session): session closed for user core Jun 25 16:35:20.650000 audit[5095]: USER_END pid=5095 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:20.650000 audit[5095]: CRED_DISP pid=5095 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:20.655322 kernel: audit: type=1106 audit(1719333320.650:698): pid=5095 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:20.655364 kernel: audit: type=1104 audit(1719333320.650:699): pid=5095 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:20.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.105:22-139.178.68.195:49004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:20.653605 systemd[1]: sshd@14-139.178.70.105:22-139.178.68.195:49004.service: Deactivated successfully. Jun 25 16:35:20.654066 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 16:35:20.654708 systemd-logind[1326]: Session 17 logged out. Waiting for processes to exit. Jun 25 16:35:20.655571 systemd-logind[1326]: Removed session 17. 
Jun 25 16:35:23.419544 kubelet[2425]: I0625 16:35:23.419515 2425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:35:23.452000 audit[5106]: NETFILTER_CFG table=filter:125 family=2 entries=8 op=nft_register_rule pid=5106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:35:23.452000 audit[5106]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffcb526ad20 a2=0 a3=7ffcb526ad0c items=0 ppid=2604 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:23.452000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:35:23.453000 audit[5106]: NETFILTER_CFG table=nat:126 family=2 entries=34 op=nft_register_chain pid=5106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:35:23.453000 audit[5106]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffcb526ad20 a2=0 a3=7ffcb526ad0c items=0 ppid=2604 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:23.453000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:35:25.639718 systemd[1]: Started sshd@15-139.178.70.105:22-139.178.68.195:49006.service - OpenSSH per-connection server daemon (139.178.68.195:49006). Jun 25 16:35:25.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.105:22-139.178.68.195:49006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:25.641256 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 16:35:25.641299 kernel: audit: type=1130 audit(1719333325.639:703): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.105:22-139.178.68.195:49006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:35:25.669000 audit[5108]: USER_ACCT pid=5108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:25.671855 sshd[5108]: Accepted publickey for core from 139.178.68.195 port 49006 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:35:25.670000 audit[5108]: CRED_ACQ pid=5108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:25.673564 sshd[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:35:25.675927 kernel: audit: type=1101 audit(1719333325.669:704): pid=5108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:25.675972 kernel: audit: type=1103 audit(1719333325.670:705): pid=5108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:25.677810 kernel: audit: type=1006 audit(1719333325.670:706): pid=5108 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jun 25 16:35:25.677852 kernel: audit: type=1300 audit(1719333325.670:706): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc88fdba90 a2=3 a3=7f1036ff6480 items=0 ppid=1 pid=5108 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:25.670000 audit[5108]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc88fdba90 a2=3 a3=7f1036ff6480 items=0 ppid=1 pid=5108 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:25.679577 systemd-logind[1326]: New session 18 of user core. Jun 25 16:35:25.682191 kernel: audit: type=1327 audit(1719333325.670:706): proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:25.670000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:25.681855 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 25 16:35:25.685000 audit[5108]: USER_START pid=5108 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:25.686000 audit[5110]: CRED_ACQ pid=5110 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:25.690403 kernel: audit: type=1105 audit(1719333325.685:707): pid=5108 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:25.690447 kernel: audit: type=1103 audit(1719333325.686:708): pid=5110 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:25.810516 sshd[5108]: pam_unix(sshd:session): session closed for user core Jun 25 16:35:25.812000 audit[5108]: USER_END pid=5108 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:25.815203 systemd[1]: Started sshd@16-139.178.70.105:22-139.178.68.195:49020.service - OpenSSH per-connection server daemon (139.178.68.195:49020). Jun 25 16:35:25.815756 kernel: audit: type=1106 audit(1719333325.812:709): pid=5108 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:25.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.105:22-139.178.68.195:49020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:25.817329 systemd-logind[1326]: Session 18 logged out. Waiting for processes to exit. Jun 25 16:35:25.817746 kernel: audit: type=1130 audit(1719333325.814:710): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.105:22-139.178.68.195:49020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:25.815000 audit[5108]: CRED_DISP pid=5108 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:25.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.105:22-139.178.68.195:49006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:35:25.818027 systemd[1]: sshd@15-139.178.70.105:22-139.178.68.195:49006.service: Deactivated successfully. Jun 25 16:35:25.818626 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 16:35:25.819415 systemd-logind[1326]: Removed session 18. Jun 25 16:35:25.855000 audit[5119]: USER_ACCT pid=5119 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:25.856873 sshd[5119]: Accepted publickey for core from 139.178.68.195 port 49020 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:35:25.856000 audit[5119]: CRED_ACQ pid=5119 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:25.856000 audit[5119]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1b1dbd10 a2=3 a3=7f211bc25480 items=0 ppid=1 pid=5119 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:25.856000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:25.858164 sshd[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:35:25.861584 systemd-logind[1326]: New session 19 of user core. Jun 25 16:35:25.871878 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 16:35:25.875000 audit[5119]: USER_START pid=5119 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:25.876000 audit[5122]: CRED_ACQ pid=5122 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:26.224498 sshd[5119]: pam_unix(sshd:session): session closed for user core Jun 25 16:35:26.224000 audit[5119]: USER_END pid=5119 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:26.224000 audit[5119]: CRED_DISP pid=5119 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:26.230327 systemd[1]: Started sshd@17-139.178.70.105:22-139.178.68.195:49030.service - OpenSSH per-connection server daemon (139.178.68.195:49030). Jun 25 16:35:26.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.105:22-139.178.68.195:49030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:35:26.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.105:22-139.178.68.195:49020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:26.230692 systemd[1]: sshd@16-139.178.70.105:22-139.178.68.195:49020.service: Deactivated successfully. Jun 25 16:35:26.231311 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 16:35:26.231960 systemd-logind[1326]: Session 19 logged out. Waiting for processes to exit. Jun 25 16:35:26.232915 systemd-logind[1326]: Removed session 19. Jun 25 16:35:26.270000 audit[5129]: USER_ACCT pid=5129 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:26.271000 audit[5129]: CRED_ACQ pid=5129 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:26.271000 audit[5129]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce510faa0 a2=3 a3=7f19f8895480 items=0 ppid=1 pid=5129 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:26.271000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:26.272600 sshd[5129]: Accepted publickey for core from 139.178.68.195 port 49030 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:35:26.272668 sshd[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:35:26.275275 systemd-logind[1326]: New session 20 of user core. Jun 25 16:35:26.279920 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jun 25 16:35:26.283000 audit[5129]: USER_START pid=5129 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:26.284000 audit[5132]: CRED_ACQ pid=5132 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:27.530000 audit[5147]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=5147 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:35:27.530000 audit[5147]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fff86f68b00 a2=0 a3=7fff86f68aec items=0 ppid=2604 pid=5147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:27.530000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:35:27.532000 audit[5147]: NETFILTER_CFG table=nat:128 family=2 entries=22 op=nft_register_rule pid=5147 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:35:27.532000 audit[5147]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff86f68b00 a2=0 a3=0 items=0 ppid=2604 pid=5147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:27.532000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:35:27.550819 sshd[5129]: pam_unix(sshd:session): session closed for user core Jun 25 16:35:27.552000 audit[5129]: USER_END pid=5129 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:27.552000 audit[5129]: CRED_DISP pid=5129 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:27.556011 systemd[1]: Started sshd@18-139.178.70.105:22-139.178.68.195:49038.service - OpenSSH per-connection server daemon (139.178.68.195:49038). Jun 25 16:35:27.556353 systemd[1]: sshd@17-139.178.70.105:22-139.178.68.195:49030.service: Deactivated successfully. Jun 25 16:35:27.556997 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 16:35:27.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.105:22-139.178.68.195:49038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:27.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.105:22-139.178.68.195:49030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jun 25 16:35:27.557847 systemd-logind[1326]: Session 20 logged out. Waiting for processes to exit. Jun 25 16:35:27.558413 systemd-logind[1326]: Removed session 20. Jun 25 16:35:27.557000 audit[5151]: NETFILTER_CFG table=filter:129 family=2 entries=32 op=nft_register_rule pid=5151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:35:27.557000 audit[5151]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffe0e1c9eb0 a2=0 a3=7ffe0e1c9e9c items=0 ppid=2604 pid=5151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:27.557000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:35:27.558000 audit[5151]: NETFILTER_CFG table=nat:130 family=2 entries=22 op=nft_register_rule pid=5151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:35:27.558000 audit[5151]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffe0e1c9eb0 a2=0 a3=0 items=0 ppid=2604 pid=5151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:27.558000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:35:27.592762 sshd[5150]: Accepted publickey for core from 139.178.68.195 port 49038 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:35:27.593894 sshd[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:35:27.590000 audit[5150]: USER_ACCT pid=5150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:27.591000 audit[5150]: CRED_ACQ pid=5150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:27.591000 audit[5150]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc17f43740 a2=3 a3=7f5a825d3480 items=0 ppid=1 pid=5150 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:27.591000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:27.596808 systemd-logind[1326]: New session 21 of user core. Jun 25 16:35:27.598845 systemd[1]: Started session-21.scope - Session 21 of User core. 
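The PROCTITLE values in the preceding records are the audited process's command line, hex-encoded with NUL bytes between arguments: 737368643A20636F7265205B707269765D decodes to "sshd: core [priv]", and the longer value on the iptables-restore records decodes to "iptables-restore -w 5 -W 100000 --noflush --counters". A minimal Python 3 sketch for decoding them while reading this log (the helper name is illustrative only, not part of any tooling referenced here):

    def decode_proctitle(hex_value: str) -> str:
        """Decode an audit PROCTITLE field: hex-encoded argv joined by NUL bytes."""
        raw = bytes.fromhex(hex_value)
        # argv entries are separated by NUL; join them with spaces for display.
        return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

    # Example taken from the records above:
    print(decode_proctitle("737368643A20636F7265205B707269765D"))
    # -> sshd: core [priv]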
Jun 25 16:35:27.600000 audit[5150]: USER_START pid=5150 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:27.601000 audit[5154]: CRED_ACQ pid=5154 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:27.979282 sshd[5150]: pam_unix(sshd:session): session closed for user core Jun 25 16:35:27.978000 audit[5150]: USER_END pid=5150 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:27.978000 audit[5150]: CRED_DISP pid=5150 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:27.986541 systemd[1]: sshd@18-139.178.70.105:22-139.178.68.195:49038.service: Deactivated successfully. Jun 25 16:35:27.987055 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 16:35:27.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.105:22-139.178.68.195:49038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:27.987581 systemd-logind[1326]: Session 21 logged out. Waiting for processes to exit. Jun 25 16:35:27.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.105:22-139.178.68.195:58320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:27.992063 systemd[1]: Started sshd@19-139.178.70.105:22-139.178.68.195:58320.service - OpenSSH per-connection server daemon (139.178.68.195:58320). Jun 25 16:35:27.993620 systemd-logind[1326]: Removed session 21. 
Jun 25 16:35:28.031000 audit[5162]: USER_ACCT pid=5162 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:28.033409 sshd[5162]: Accepted publickey for core from 139.178.68.195 port 58320 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:35:28.032000 audit[5162]: CRED_ACQ pid=5162 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:28.032000 audit[5162]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde56d8c60 a2=3 a3=7faceeade480 items=0 ppid=1 pid=5162 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:28.032000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:28.034863 sshd[5162]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:35:28.037793 systemd-logind[1326]: New session 22 of user core. Jun 25 16:35:28.039875 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 16:35:28.041000 audit[5162]: USER_START pid=5162 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:28.042000 audit[5164]: CRED_ACQ pid=5164 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:28.159871 sshd[5162]: pam_unix(sshd:session): session closed for user core Jun 25 16:35:28.159000 audit[5162]: USER_END pid=5162 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:28.159000 audit[5162]: CRED_DISP pid=5162 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:28.162474 systemd[1]: sshd@19-139.178.70.105:22-139.178.68.195:58320.service: Deactivated successfully. Jun 25 16:35:28.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.105:22-139.178.68.195:58320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:28.162988 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 16:35:28.163390 systemd-logind[1326]: Session 22 logged out. Waiting for processes to exit. Jun 25 16:35:28.164143 systemd-logind[1326]: Removed session 22. Jun 25 16:35:28.240638 systemd[1]: run-containerd-runc-k8s.io-f47c1b9aee63ea9831b449233627be9b80bd5b212f2102522d75dc565747eabb-runc.i51mB7.mount: Deactivated successfully. 
Jun 25 16:35:29.236412 systemd[1]: run-containerd-runc-k8s.io-b6f0aefe61be1ca5b0ac2a5056a687fd5aaaf4e3201729e492e91e98ea2a0e03-runc.ySgcw6.mount: Deactivated successfully. Jun 25 16:35:32.273602 kernel: kauditd_printk_skb: 57 callbacks suppressed Jun 25 16:35:32.273698 kernel: audit: type=1325 audit(1719333332.270:752): table=filter:131 family=2 entries=20 op=nft_register_rule pid=5213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:35:32.270000 audit[5213]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=5213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:35:32.270000 audit[5213]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd854ff980 a2=0 a3=7ffd854ff96c items=0 ppid=2604 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:32.276184 kernel: audit: type=1300 audit(1719333332.270:752): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd854ff980 a2=0 a3=7ffd854ff96c items=0 ppid=2604 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:32.276223 kernel: audit: type=1327 audit(1719333332.270:752): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:35:32.270000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:35:32.271000 audit[5213]: NETFILTER_CFG table=nat:132 family=2 entries=106 op=nft_register_chain pid=5213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:35:32.278463 kernel: audit: type=1325 audit(1719333332.271:753): table=nat:132 family=2 entries=106 op=nft_register_chain pid=5213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:35:32.278492 kernel: audit: type=1300 audit(1719333332.271:753): arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffd854ff980 a2=0 a3=7ffd854ff96c items=0 ppid=2604 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:32.271000 audit[5213]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffd854ff980 a2=0 a3=7ffd854ff96c items=0 ppid=2604 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:32.280751 kernel: audit: type=1327 audit(1719333332.271:753): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:35:32.271000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:35:33.169971 systemd[1]: Started sshd@20-139.178.70.105:22-139.178.68.195:58334.service - OpenSSH per-connection server daemon (139.178.68.195:58334). 
Jun 25 16:35:33.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.105:22-139.178.68.195:58334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:33.173737 kernel: audit: type=1130 audit(1719333333.169:754): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.105:22-139.178.68.195:58334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:33.227000 audit[5218]: USER_ACCT pid=5218 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:33.230154 sshd[5218]: Accepted publickey for core from 139.178.68.195 port 58334 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:35:33.229000 audit[5218]: CRED_ACQ pid=5218 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:33.232494 sshd[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:35:33.234762 kernel: audit: type=1101 audit(1719333333.227:755): pid=5218 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:33.234874 kernel: audit: type=1103 audit(1719333333.229:756): pid=5218 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:33.236323 kernel: audit: type=1006 audit(1719333333.229:757): pid=5218 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jun 25 16:35:33.229000 audit[5218]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd4f77f270 a2=3 a3=7f557a84d480 items=0 ppid=1 pid=5218 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:33.229000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:33.239143 systemd-logind[1326]: New session 23 of user core. Jun 25 16:35:33.242896 systemd[1]: Started session-23.scope - Session 23 of User core. 
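The kernel "audit: type=NNNN" lines echo the same events with numeric record types; the pairings visible in this log (1105/USER_START, 1106/USER_END, 1103/CRED_ACQ, 1130/SERVICE_START, 1325/NETFILTER_CFG, 1300/SYSCALL, 1327/PROCTITLE, 1006/LOGIN, and so on) follow the standard Linux audit type numbering. A small lookup table, limited to the types that actually appear here, can make the kernel echoes easier to scan:

    # Numeric audit record types seen in this journal (standard Linux audit numbering).
    AUDIT_TYPES = {
        1006: "LOGIN",
        1101: "USER_ACCT",
        1103: "CRED_ACQ",
        1104: "CRED_DISP",
        1105: "USER_START",
        1106: "USER_END",
        1130: "SERVICE_START",
        1131: "SERVICE_STOP",
        1300: "SYSCALL",
        1325: "NETFILTER_CFG",
        1327: "PROCTITLE",
        1400: "AVC",
    }

    def type_name(numeric: int) -> str:
        """Map a type= value from a kernel 'audit:' line to its record name."""
        return AUDIT_TYPES.get(numeric, f"UNKNOWN({numeric})")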
Jun 25 16:35:33.244000 audit[5218]: USER_START pid=5218 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:33.246000 audit[5220]: CRED_ACQ pid=5220 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:33.387806 sshd[5218]: pam_unix(sshd:session): session closed for user core Jun 25 16:35:33.386000 audit[5218]: USER_END pid=5218 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:33.386000 audit[5218]: CRED_DISP pid=5218 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:33.389455 systemd-logind[1326]: Session 23 logged out. Waiting for processes to exit. Jun 25 16:35:33.389651 systemd[1]: sshd@20-139.178.70.105:22-139.178.68.195:58334.service: Deactivated successfully. Jun 25 16:35:33.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.105:22-139.178.68.195:58334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:33.390109 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 16:35:33.390530 systemd-logind[1326]: Removed session 23. Jun 25 16:35:38.396291 systemd[1]: Started sshd@21-139.178.70.105:22-139.178.68.195:59138.service - OpenSSH per-connection server daemon (139.178.68.195:59138). Jun 25 16:35:38.399070 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 16:35:38.399116 kernel: audit: type=1130 audit(1719333338.394:763): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.105:22-139.178.68.195:59138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:38.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.105:22-139.178.68.195:59138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:35:38.455000 audit[5235]: USER_ACCT pid=5235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:38.457645 sshd[5235]: Accepted publickey for core from 139.178.68.195 port 59138 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:35:38.459744 kernel: audit: type=1101 audit(1719333338.455:764): pid=5235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:38.458000 audit[5235]: CRED_ACQ pid=5235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:38.463521 sshd[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:35:38.464830 kernel: audit: type=1103 audit(1719333338.458:765): pid=5235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:38.464874 kernel: audit: type=1006 audit(1719333338.458:766): pid=5235 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jun 25 16:35:38.464896 kernel: audit: type=1300 audit(1719333338.458:766): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe939c5ff0 a2=3 a3=7f60829b6480 items=0 ppid=1 pid=5235 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:38.458000 audit[5235]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe939c5ff0 a2=3 a3=7f60829b6480 items=0 ppid=1 pid=5235 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:38.467793 kernel: audit: type=1327 audit(1719333338.458:766): proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:38.458000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:38.470294 systemd-logind[1326]: New session 24 of user core. Jun 25 16:35:38.474814 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jun 25 16:35:38.476000 audit[5235]: USER_START pid=5235 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:38.481201 kernel: audit: type=1105 audit(1719333338.476:767): pid=5235 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:38.479000 audit[5237]: CRED_ACQ pid=5237 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:38.483748 kernel: audit: type=1103 audit(1719333338.479:768): pid=5237 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:38.755632 sshd[5235]: pam_unix(sshd:session): session closed for user core Jun 25 16:35:38.755000 audit[5235]: USER_END pid=5235 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:38.759303 systemd[1]: sshd@21-139.178.70.105:22-139.178.68.195:59138.service: Deactivated successfully. Jun 25 16:35:38.760331 kernel: audit: type=1106 audit(1719333338.755:769): pid=5235 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:38.760393 kernel: audit: type=1104 audit(1719333338.755:770): pid=5235 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:38.755000 audit[5235]: CRED_DISP pid=5235 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:38.760074 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 16:35:38.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.105:22-139.178.68.195:59138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:38.761523 systemd-logind[1326]: Session 24 logged out. Waiting for processes to exit. Jun 25 16:35:38.762114 systemd-logind[1326]: Removed session 24. Jun 25 16:35:43.768403 systemd[1]: Started sshd@22-139.178.70.105:22-139.178.68.195:59148.service - OpenSSH per-connection server daemon (139.178.68.195:59148). 
Jun 25 16:35:43.769832 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:35:43.769878 kernel: audit: type=1130 audit(1719333343.766:772): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.105:22-139.178.68.195:59148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:43.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.105:22-139.178.68.195:59148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:35:43.804000 audit[5247]: USER_ACCT pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:43.809018 sshd[5247]: Accepted publickey for core from 139.178.68.195 port 59148 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:35:43.809743 kernel: audit: type=1101 audit(1719333343.804:773): pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:43.808000 audit[5247]: CRED_ACQ pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:43.811906 sshd[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:35:43.812813 kernel: audit: type=1103 audit(1719333343.808:774): pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:43.812857 kernel: audit: type=1006 audit(1719333343.808:775): pid=5247 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jun 25 16:35:43.808000 audit[5247]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe079db7f0 a2=3 a3=7f586291a480 items=0 ppid=1 pid=5247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:43.816983 kernel: audit: type=1300 audit(1719333343.808:775): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe079db7f0 a2=3 a3=7f586291a480 items=0 ppid=1 pid=5247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:35:43.808000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:43.817496 systemd-logind[1326]: New session 25 of user core. Jun 25 16:35:43.820203 kernel: audit: type=1327 audit(1719333343.808:775): proctitle=737368643A20636F7265205B707269765D Jun 25 16:35:43.819866 systemd[1]: Started session-25.scope - Session 25 of User core. 
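Each SSH login above follows the same audit sequence for a given ses= id: USER_ACCT and CRED_ACQ while the public key is accepted, a type=1006 (LOGIN) record assigning the new ses number, USER_START and a second CRED_ACQ when the PAM session opens, then USER_END and CRED_DISP when it closes. A rough sketch for grouping records by session id; it assumes one audit record per line, as in a raw journal export, and skips ses=4294967295, which is the unset value:

    import re
    from collections import defaultdict

    SES_RE = re.compile(r"\bses=(\d+)\b")
    NAME_RE = re.compile(r"audit\[\d+\]: (\w+) ")

    def sessions(lines):
        """Collect audit record names per ses= id to follow a session's lifecycle."""
        by_ses = defaultdict(list)
        for line in lines:
            ses, name = SES_RE.search(line), NAME_RE.search(line)
            if ses and name and ses.group(1) != "4294967295":
                by_ses[ses.group(1)].append(name.group(1))
        return dict(by_ses)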
Jun 25 16:35:43.821000 audit[5247]: USER_START pid=5247 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:43.822000 audit[5249]: CRED_ACQ pid=5249 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:43.828378 kernel: audit: type=1105 audit(1719333343.821:776): pid=5247 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:43.828416 kernel: audit: type=1103 audit(1719333343.822:777): pid=5249 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:43.926755 sshd[5247]: pam_unix(sshd:session): session closed for user core Jun 25 16:35:43.925000 audit[5247]: USER_END pid=5247 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:43.928653 systemd-logind[1326]: Session 25 logged out. Waiting for processes to exit. Jun 25 16:35:43.925000 audit[5247]: CRED_DISP pid=5247 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:43.929478 systemd[1]: sshd@22-139.178.70.105:22-139.178.68.195:59148.service: Deactivated successfully. Jun 25 16:35:43.929952 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 16:35:43.930865 systemd-logind[1326]: Removed session 25. Jun 25 16:35:43.931482 kernel: audit: type=1106 audit(1719333343.925:778): pid=5247 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:43.931655 kernel: audit: type=1104 audit(1719333343.925:779): pid=5247 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:35:43.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.105:22-139.178.68.195:59148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:35:44.690000 audit[2295]: AVC avc: denied { watch } for pid=2295 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:35:44.690000 audit[2295]: AVC avc: denied { watch } for pid=2295 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=1041984 scontext=system_u:system_r:container_t:s0:c480,c1023 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:35:44.690000 audit[2295]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c002736840 a2=fc6 a3=0 items=0 ppid=2134 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c480,c1023 key=(null) Jun 25 16:35:44.690000 audit[2295]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0025fdd80 a2=fc6 a3=0 items=0 ppid=2134 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c480,c1023 key=(null) Jun 25 16:35:44.690000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:35:44.690000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:35:45.567000 audit[2286]: AVC avc: denied { watch } for pid=2286 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=1041984 scontext=system_u:system_r:container_t:s0:c449,c892 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:35:45.567000 audit[2286]: AVC avc: denied { watch } for pid=2286 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=1041980 scontext=system_u:system_r:container_t:s0:c449,c892 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:35:45.567000 audit[2286]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=77 a1=c00389a270 a2=fc6 a3=0 items=0 ppid=2124 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c449,c892 key=(null) Jun 25 16:35:45.567000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313035002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:35:45.567000 audit[2286]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=78 a1=c008061380 a2=fc6 a3=0 items=0 ppid=2124 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c449,c892 key=(null) Jun 25 16:35:45.567000 audit: 
PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313035002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:35:45.570000 audit[2286]: AVC avc: denied { watch } for pid=2286 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=1041986 scontext=system_u:system_r:container_t:s0:c449,c892 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:35:45.570000 audit[2286]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=77 a1=c0080614a0 a2=fc6 a3=0 items=0 ppid=2124 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c449,c892 key=(null) Jun 25 16:35:45.570000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313035002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:35:45.570000 audit[2286]: AVC avc: denied { watch } for pid=2286 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c449,c892 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:35:45.570000 audit[2286]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=77 a1=c005e7b6e0 a2=fc6 a3=0 items=0 ppid=2124 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c449,c892 key=(null) Jun 25 16:35:45.570000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313035002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:35:45.572000 audit[2286]: AVC avc: denied { watch } for pid=2286 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=1041984 scontext=system_u:system_r:container_t:s0:c449,c892 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:35:45.572000 audit[2286]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=77 a1=c00389a480 a2=fc6 a3=0 items=0 ppid=2124 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c449,c892 key=(null) Jun 25 16:35:45.572000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313035002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jun 25 16:35:45.572000 audit[2286]: AVC avc: denied { watch } for pid=2286 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=1041978 scontext=system_u:system_r:container_t:s0:c449,c892 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:35:45.572000 audit[2286]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=78 a1=c005e7b860 a2=fc6 a3=0 items=0 ppid=2124 pid=2286 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c449,c892 key=(null) Jun 25 16:35:45.572000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313035002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B
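The closing records are SELinux AVC denials: the kube-controller-manager and kube-apiserver containers (domain container_t) are denied the watch permission on certificate files labeled etc_t under /etc/kubernetes/pki, and the accompanying SYSCALL records (syscall=254 with arch=c000003e, i.e. inotify_add_watch on x86_64) fail with exit=-13 (EACCES) while permissive=0. A small parsing sketch for pulling the interesting fields out of such records; the regex is tailored to the format shown here, not a general audit parser:

    import re

    AVC_RE = re.compile(
        r"AVC avc:\s+denied\s+\{ (?P<perm>[^}]+) \}.*?"
        r'comm="(?P<comm>[^"]+)".*?path="(?P<path>[^"]+)".*?'
        r"scontext=(?P<scontext>\S+)\s+tcontext=(?P<tcontext>\S+)\s+tclass=(?P<tclass>\S+)"
    )

    def parse_avc(record: str):
        """Extract permission, process, path and SELinux contexts from an AVC denial."""
        match = AVC_RE.search(record)
        return match.groupdict() if match else None

    # For the entries above this yields perm="watch", comm="kube-apiserver" or
    # "kube-controller", paths under /etc/kubernetes/pki, and tcontext ending in etc_t.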