Jun 25 16:14:38.735072 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 13:16:37 -00 2024 Jun 25 16:14:38.735086 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:14:38.735092 kernel: Disabled fast string operations Jun 25 16:14:38.735096 kernel: BIOS-provided physical RAM map: Jun 25 16:14:38.735099 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jun 25 16:14:38.735103 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jun 25 16:14:38.735108 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jun 25 16:14:38.735112 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jun 25 16:14:38.735116 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jun 25 16:14:38.735119 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jun 25 16:14:38.735123 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jun 25 16:14:38.735127 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jun 25 16:14:38.735130 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jun 25 16:14:38.735134 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jun 25 16:14:38.735140 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jun 25 16:14:38.735144 kernel: NX (Execute Disable) protection: active Jun 25 16:14:38.735148 kernel: SMBIOS 2.7 present. 
Jun 25 16:14:38.735152 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jun 25 16:14:38.735156 kernel: vmware: hypercall mode: 0x00 Jun 25 16:14:38.735160 kernel: Hypervisor detected: VMware Jun 25 16:14:38.735164 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jun 25 16:14:38.735169 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jun 25 16:14:38.735173 kernel: vmware: using clock offset of 2909889194 ns Jun 25 16:14:38.735179 kernel: tsc: Detected 3408.000 MHz processor Jun 25 16:14:38.735187 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 16:14:38.735192 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 16:14:38.735197 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jun 25 16:14:38.735201 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 16:14:38.735205 kernel: total RAM covered: 3072M Jun 25 16:14:38.735209 kernel: Found optimal setting for mtrr clean up Jun 25 16:14:38.735214 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jun 25 16:14:38.735220 kernel: Using GB pages for direct mapping Jun 25 16:14:38.735224 kernel: ACPI: Early table checksum verification disabled Jun 25 16:14:38.735228 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jun 25 16:14:38.735233 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jun 25 16:14:38.735237 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jun 25 16:14:38.735241 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jun 25 16:14:38.735245 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jun 25 16:14:38.735249 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jun 25 16:14:38.735255 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jun 25 16:14:38.735261 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 
PTLTD ? APIC 06040000 LTP 00000000) Jun 25 16:14:38.735266 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jun 25 16:14:38.735271 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jun 25 16:14:38.735275 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jun 25 16:14:38.735280 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jun 25 16:14:38.735286 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jun 25 16:14:38.735290 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jun 25 16:14:38.735295 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jun 25 16:14:38.735300 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jun 25 16:14:38.735304 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jun 25 16:14:38.735309 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jun 25 16:14:38.735313 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jun 25 16:14:38.735318 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jun 25 16:14:38.735323 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jun 25 16:14:38.735329 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jun 25 16:14:38.735333 kernel: system APIC only can use physical flat Jun 25 16:14:38.735338 kernel: Setting APIC routing to physical flat. 
Jun 25 16:14:38.735343 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 25 16:14:38.735350 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jun 25 16:14:38.735357 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jun 25 16:14:38.735362 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jun 25 16:14:38.735366 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jun 25 16:14:38.735371 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jun 25 16:14:38.735375 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jun 25 16:14:38.735381 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jun 25 16:14:38.735385 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Jun 25 16:14:38.735390 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Jun 25 16:14:38.735394 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Jun 25 16:14:38.735399 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Jun 25 16:14:38.735403 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Jun 25 16:14:38.735408 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Jun 25 16:14:38.735412 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Jun 25 16:14:38.735417 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Jun 25 16:14:38.735421 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Jun 25 16:14:38.735426 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Jun 25 16:14:38.735431 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Jun 25 16:14:38.735436 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Jun 25 16:14:38.735440 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Jun 25 16:14:38.735444 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Jun 25 16:14:38.735449 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Jun 25 16:14:38.735454 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Jun 25 16:14:38.735458 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Jun 25 16:14:38.735462 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Jun 25 16:14:38.735467 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Jun 25 16:14:38.735472 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Jun 25 16:14:38.735477 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Jun 25 16:14:38.735482 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Jun 25 
16:14:38.735486 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Jun 25 16:14:38.735490 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Jun 25 16:14:38.735495 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Jun 25 16:14:38.735500 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Jun 25 16:14:38.735504 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Jun 25 16:14:38.735508 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Jun 25 16:14:38.735513 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Jun 25 16:14:38.735517 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Jun 25 16:14:38.735523 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Jun 25 16:14:38.735528 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Jun 25 16:14:38.735532 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Jun 25 16:14:38.735538 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Jun 25 16:14:38.735545 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Jun 25 16:14:38.735550 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Jun 25 16:14:38.735555 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Jun 25 16:14:38.735559 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Jun 25 16:14:38.735564 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Jun 25 16:14:38.735568 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Jun 25 16:14:38.735574 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Jun 25 16:14:38.735578 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Jun 25 16:14:38.735583 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Jun 25 16:14:38.735587 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Jun 25 16:14:38.735592 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Jun 25 16:14:38.735596 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Jun 25 16:14:38.735601 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Jun 25 16:14:38.735605 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Jun 25 16:14:38.735610 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Jun 25 16:14:38.735614 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Jun 25 16:14:38.735618 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Jun 25 16:14:38.735624 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Jun 25 16:14:38.735629 
kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Jun 25 16:14:38.735637 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Jun 25 16:14:38.735642 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Jun 25 16:14:38.735647 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Jun 25 16:14:38.735652 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Jun 25 16:14:38.735657 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Jun 25 16:14:38.735662 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Jun 25 16:14:38.735667 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Jun 25 16:14:38.735672 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Jun 25 16:14:38.735677 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Jun 25 16:14:38.735682 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Jun 25 16:14:38.735687 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Jun 25 16:14:38.735691 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Jun 25 16:14:38.735698 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Jun 25 16:14:38.735706 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Jun 25 16:14:38.735711 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Jun 25 16:14:38.735716 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Jun 25 16:14:38.735721 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Jun 25 16:14:38.735727 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Jun 25 16:14:38.735732 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Jun 25 16:14:38.735744 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Jun 25 16:14:38.735750 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Jun 25 16:14:38.735755 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Jun 25 16:14:38.735759 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Jun 25 16:14:38.735764 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Jun 25 16:14:38.735769 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Jun 25 16:14:38.735774 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Jun 25 16:14:38.735778 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Jun 25 16:14:38.735785 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Jun 25 16:14:38.735790 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Jun 25 16:14:38.735795 kernel: SRAT: PXM 0 
-> APIC 0xb4 -> Node 0 Jun 25 16:14:38.735799 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Jun 25 16:14:38.735804 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Jun 25 16:14:38.735809 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Jun 25 16:14:38.735814 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Jun 25 16:14:38.735818 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Jun 25 16:14:38.735823 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Jun 25 16:14:38.735828 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Jun 25 16:14:38.735832 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Jun 25 16:14:38.735838 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Jun 25 16:14:38.735845 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Jun 25 16:14:38.735852 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Jun 25 16:14:38.735858 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Jun 25 16:14:38.735863 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Jun 25 16:14:38.735868 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Jun 25 16:14:38.735873 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Jun 25 16:14:38.735877 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Jun 25 16:14:38.735882 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Jun 25 16:14:38.735890 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Jun 25 16:14:38.735896 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Jun 25 16:14:38.735901 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Jun 25 16:14:38.735906 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Jun 25 16:14:38.735911 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Jun 25 16:14:38.735915 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Jun 25 16:14:38.735921 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Jun 25 16:14:38.735925 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Jun 25 16:14:38.735930 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Jun 25 16:14:38.735935 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Jun 25 16:14:38.735939 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Jun 25 16:14:38.735945 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Jun 25 16:14:38.735950 kernel: SRAT: PXM 0 -> APIC 0xf0 -> 
Node 0 Jun 25 16:14:38.735955 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Jun 25 16:14:38.735960 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Jun 25 16:14:38.735965 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Jun 25 16:14:38.735970 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Jun 25 16:14:38.735974 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Jun 25 16:14:38.735979 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Jun 25 16:14:38.735984 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Jun 25 16:14:38.735989 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jun 25 16:14:38.735995 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jun 25 16:14:38.736000 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jun 25 16:14:38.736005 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Jun 25 16:14:38.736010 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Jun 25 16:14:38.736016 kernel: Zone ranges: Jun 25 16:14:38.736021 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 16:14:38.736025 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jun 25 16:14:38.736030 kernel: Normal empty Jun 25 16:14:38.736035 kernel: Movable zone start for each node Jun 25 16:14:38.736040 kernel: Early memory node ranges Jun 25 16:14:38.736046 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jun 25 16:14:38.736051 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jun 25 16:14:38.736056 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jun 25 16:14:38.736060 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jun 25 16:14:38.736066 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 16:14:38.736073 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jun 25 16:14:38.736081 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jun 25 16:14:38.736086 kernel: ACPI: PM-Timer IO Port: 0x1008 Jun 25 16:14:38.736091 kernel: 
system APIC only can use physical flat Jun 25 16:14:38.736097 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jun 25 16:14:38.736102 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jun 25 16:14:38.736107 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jun 25 16:14:38.736112 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jun 25 16:14:38.736117 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jun 25 16:14:38.736122 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jun 25 16:14:38.736127 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jun 25 16:14:38.736132 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jun 25 16:14:38.736137 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jun 25 16:14:38.736142 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jun 25 16:14:38.736147 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jun 25 16:14:38.736152 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jun 25 16:14:38.736157 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jun 25 16:14:38.736162 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jun 25 16:14:38.736167 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jun 25 16:14:38.736172 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jun 25 16:14:38.736177 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jun 25 16:14:38.736181 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jun 25 16:14:38.736186 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jun 25 16:14:38.736191 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jun 25 16:14:38.736196 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jun 25 16:14:38.736201 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jun 25 16:14:38.736208 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jun 25 16:14:38.736215 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x17] high edge lint[0x1]) Jun 25 16:14:38.736220 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jun 25 16:14:38.736225 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jun 25 16:14:38.736230 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jun 25 16:14:38.736235 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jun 25 16:14:38.736240 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jun 25 16:14:38.736246 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jun 25 16:14:38.736251 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jun 25 16:14:38.736255 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jun 25 16:14:38.736260 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jun 25 16:14:38.736265 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jun 25 16:14:38.736270 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jun 25 16:14:38.736275 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jun 25 16:14:38.736279 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jun 25 16:14:38.736284 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jun 25 16:14:38.736289 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jun 25 16:14:38.736295 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jun 25 16:14:38.736300 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jun 25 16:14:38.736304 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jun 25 16:14:38.736309 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jun 25 16:14:38.736314 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jun 25 16:14:38.736319 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jun 25 16:14:38.736323 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jun 25 16:14:38.736328 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jun 25 16:14:38.736333 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x2f] high edge lint[0x1]) Jun 25 16:14:38.736338 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jun 25 16:14:38.736344 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jun 25 16:14:38.736348 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jun 25 16:14:38.736353 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jun 25 16:14:38.736358 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jun 25 16:14:38.736365 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jun 25 16:14:38.736371 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jun 25 16:14:38.736376 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jun 25 16:14:38.736381 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jun 25 16:14:38.736386 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jun 25 16:14:38.736391 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jun 25 16:14:38.736396 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jun 25 16:14:38.736401 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jun 25 16:14:38.736406 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jun 25 16:14:38.736411 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jun 25 16:14:38.736416 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jun 25 16:14:38.736420 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jun 25 16:14:38.736425 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jun 25 16:14:38.736430 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jun 25 16:14:38.736435 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jun 25 16:14:38.736440 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jun 25 16:14:38.736446 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jun 25 16:14:38.736450 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jun 25 16:14:38.736455 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x47] high edge lint[0x1]) Jun 25 16:14:38.736460 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jun 25 16:14:38.736465 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jun 25 16:14:38.736470 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jun 25 16:14:38.736475 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jun 25 16:14:38.736480 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jun 25 16:14:38.736484 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jun 25 16:14:38.736490 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jun 25 16:14:38.736495 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jun 25 16:14:38.736500 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jun 25 16:14:38.736505 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jun 25 16:14:38.736510 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jun 25 16:14:38.736515 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jun 25 16:14:38.736519 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jun 25 16:14:38.736524 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jun 25 16:14:38.736529 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jun 25 16:14:38.736534 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jun 25 16:14:38.736539 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jun 25 16:14:38.736544 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jun 25 16:14:38.736549 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jun 25 16:14:38.736556 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jun 25 16:14:38.736564 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jun 25 16:14:38.736568 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jun 25 16:14:38.736573 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jun 25 16:14:38.736578 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x5f] high edge lint[0x1]) Jun 25 16:14:38.736583 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jun 25 16:14:38.736588 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jun 25 16:14:38.736594 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jun 25 16:14:38.736599 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jun 25 16:14:38.736603 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jun 25 16:14:38.736608 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jun 25 16:14:38.736613 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jun 25 16:14:38.736618 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jun 25 16:14:38.736622 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jun 25 16:14:38.736627 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jun 25 16:14:38.736632 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jun 25 16:14:38.736637 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jun 25 16:14:38.736643 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jun 25 16:14:38.736647 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jun 25 16:14:38.736652 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jun 25 16:14:38.736657 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jun 25 16:14:38.736662 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jun 25 16:14:38.736667 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jun 25 16:14:38.736671 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jun 25 16:14:38.736676 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jun 25 16:14:38.736681 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jun 25 16:14:38.736687 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jun 25 16:14:38.736691 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jun 25 16:14:38.736696 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x77] high edge lint[0x1]) Jun 25 16:14:38.736701 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jun 25 16:14:38.736706 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jun 25 16:14:38.736711 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jun 25 16:14:38.736715 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jun 25 16:14:38.736720 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jun 25 16:14:38.736725 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jun 25 16:14:38.736730 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jun 25 16:14:38.736741 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jun 25 16:14:38.736748 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jun 25 16:14:38.736755 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jun 25 16:14:38.736760 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 16:14:38.736765 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jun 25 16:14:38.736770 kernel: TSC deadline timer available Jun 25 16:14:38.736775 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Jun 25 16:14:38.736780 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jun 25 16:14:38.736785 kernel: Booting paravirtualized kernel on VMware hypervisor Jun 25 16:14:38.736790 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 16:14:38.736797 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jun 25 16:14:38.736802 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u262144 Jun 25 16:14:38.736807 kernel: pcpu-alloc: s194792 r8192 d30488 u262144 alloc=1*2097152 Jun 25 16:14:38.736812 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jun 25 16:14:38.736817 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jun 25 16:14:38.736822 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 
022 023 Jun 25 16:14:38.736826 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jun 25 16:14:38.736831 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jun 25 16:14:38.736837 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jun 25 16:14:38.736842 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jun 25 16:14:38.736854 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jun 25 16:14:38.736860 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jun 25 16:14:38.736865 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jun 25 16:14:38.736871 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jun 25 16:14:38.736876 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jun 25 16:14:38.736881 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jun 25 16:14:38.736886 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jun 25 16:14:38.736892 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jun 25 16:14:38.736897 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jun 25 16:14:38.736902 kernel: Fallback order for Node 0: 0 Jun 25 16:14:38.736908 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Jun 25 16:14:38.736914 kernel: Policy zone: DMA32 Jun 25 16:14:38.736920 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:14:38.736936 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jun 25 16:14:38.736942 kernel: random: crng init done Jun 25 16:14:38.736949 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jun 25 16:14:38.736955 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jun 25 16:14:38.736960 kernel: printk: log_buf_len min size: 262144 bytes Jun 25 16:14:38.736965 kernel: printk: log_buf_len: 1048576 bytes Jun 25 16:14:38.736970 kernel: printk: early log buf free: 239640(91%) Jun 25 16:14:38.736975 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 16:14:38.736981 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 16:14:38.736986 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 16:14:38.736991 kernel: Memory: 1933736K/2096628K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 162632K reserved, 0K cma-reserved) Jun 25 16:14:38.736998 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jun 25 16:14:38.737004 kernel: ftrace: allocating 36080 entries in 141 pages Jun 25 16:14:38.737012 kernel: ftrace: allocated 141 pages with 4 groups Jun 25 16:14:38.737019 kernel: Dynamic Preempt: voluntary Jun 25 16:14:38.737026 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 16:14:38.737031 kernel: rcu: RCU event tracing is enabled. Jun 25 16:14:38.737038 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jun 25 16:14:38.737043 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 16:14:38.737049 kernel: Rude variant of Tasks RCU enabled. Jun 25 16:14:38.737054 kernel: Tracing variant of Tasks RCU enabled. Jun 25 16:14:38.737059 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jun 25 16:14:38.737064 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jun 25 16:14:38.737069 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jun 25 16:14:38.737075 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Jun 25 16:14:38.737080 kernel: Console: colour VGA+ 80x25 Jun 25 16:14:38.737086 kernel: printk: console [tty0] enabled Jun 25 16:14:38.737092 kernel: printk: console [ttyS0] enabled Jun 25 16:14:38.737097 kernel: ACPI: Core revision 20220331 Jun 25 16:14:38.737102 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jun 25 16:14:38.737108 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 16:14:38.737113 kernel: x2apic enabled Jun 25 16:14:38.737118 kernel: Switched APIC routing to physical x2apic. Jun 25 16:14:38.737124 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 25 16:14:38.737129 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jun 25 16:14:38.737135 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Jun 25 16:14:38.737141 kernel: Disabled fast string operations Jun 25 16:14:38.737146 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 25 16:14:38.737152 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jun 25 16:14:38.737157 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 16:14:38.737162 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jun 25 16:14:38.737167 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jun 25 16:14:38.737173 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jun 25 16:14:38.737178 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 16:14:38.737185 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jun 25 16:14:38.737190 kernel: RETBleed: Mitigation: Enhanced IBRS Jun 25 16:14:38.737195 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 25 16:14:38.737201 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 25 16:14:38.737206 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 16:14:38.737212 kernel: SRBDS: Unknown: Dependent on hypervisor status Jun 25 16:14:38.737217 kernel: GDS: Unknown: Dependent on hypervisor status Jun 25 16:14:38.737222 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 16:14:38.737227 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 16:14:38.737234 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 16:14:38.737239 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 16:14:38.737244 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Jun 25 16:14:38.737250 kernel: Freeing SMP alternatives memory: 32K Jun 25 16:14:38.737256 kernel: pid_max: default: 131072 minimum: 1024 Jun 25 16:14:38.737261 kernel: LSM: Security Framework initializing Jun 25 16:14:38.737266 kernel: SELinux: Initializing. Jun 25 16:14:38.737271 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 16:14:38.737277 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 16:14:38.737283 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jun 25 16:14:38.737289 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:14:38.737294 kernel: cblist_init_generic: Setting shift to 7 and lim to 1. Jun 25 16:14:38.737299 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:14:38.737304 kernel: cblist_init_generic: Setting shift to 7 and lim to 1. Jun 25 16:14:38.737310 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:14:38.737315 kernel: cblist_init_generic: Setting shift to 7 and lim to 1. Jun 25 16:14:38.737321 kernel: Performance Events: Skylake events, core PMU driver. Jun 25 16:14:38.737330 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jun 25 16:14:38.737336 kernel: core: CPUID marked event: 'instructions' unavailable Jun 25 16:14:38.737343 kernel: core: CPUID marked event: 'bus cycles' unavailable Jun 25 16:14:38.737348 kernel: core: CPUID marked event: 'cache references' unavailable Jun 25 16:14:38.737353 kernel: core: CPUID marked event: 'cache misses' unavailable Jun 25 16:14:38.737358 kernel: core: CPUID marked event: 'branch instructions' unavailable Jun 25 16:14:38.737363 kernel: core: CPUID marked event: 'branch misses' unavailable Jun 25 16:14:38.737369 kernel: ... version: 1 Jun 25 16:14:38.737374 kernel: ... bit width: 48 Jun 25 16:14:38.737379 kernel: ... 
generic registers: 4 Jun 25 16:14:38.737384 kernel: ... value mask: 0000ffffffffffff Jun 25 16:14:38.737390 kernel: ... max period: 000000007fffffff Jun 25 16:14:38.737395 kernel: ... fixed-purpose events: 0 Jun 25 16:14:38.737401 kernel: ... event mask: 000000000000000f Jun 25 16:14:38.737406 kernel: signal: max sigframe size: 1776 Jun 25 16:14:38.737411 kernel: rcu: Hierarchical SRCU implementation. Jun 25 16:14:38.737417 kernel: rcu: Max phase no-delay instances is 400. Jun 25 16:14:38.737422 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 25 16:14:38.737427 kernel: smp: Bringing up secondary CPUs ... Jun 25 16:14:38.737432 kernel: x86: Booting SMP configuration: Jun 25 16:14:38.737438 kernel: .... node #0, CPUs: #1 Jun 25 16:14:38.737444 kernel: Disabled fast string operations Jun 25 16:14:38.737449 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jun 25 16:14:38.737454 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jun 25 16:14:38.737459 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 16:14:38.737464 kernel: smpboot: Max logical packages: 128 Jun 25 16:14:38.737470 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jun 25 16:14:38.737475 kernel: devtmpfs: initialized Jun 25 16:14:38.737483 kernel: x86/mm: Memory block size: 128MB Jun 25 16:14:38.737490 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jun 25 16:14:38.737496 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 16:14:38.737502 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jun 25 16:14:38.737507 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 16:14:38.737512 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 16:14:38.737517 kernel: audit: initializing netlink subsys (disabled) Jun 25 16:14:38.737523 kernel: audit: type=2000 audit(1719332077.063:1): state=initialized audit_enabled=0 
res=1 Jun 25 16:14:38.737528 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 16:14:38.737533 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 16:14:38.737538 kernel: cpuidle: using governor menu Jun 25 16:14:38.737544 kernel: Simple Boot Flag at 0x36 set to 0x80 Jun 25 16:14:38.737550 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 16:14:38.737555 kernel: dca service started, version 1.12.1 Jun 25 16:14:38.737560 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jun 25 16:14:38.737566 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820 Jun 25 16:14:38.737571 kernel: PCI: Using configuration type 1 for base access Jun 25 16:14:38.737576 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 25 16:14:38.737582 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 16:14:38.737588 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 16:14:38.737594 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 16:14:38.737599 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 16:14:38.737604 kernel: ACPI: Added _OSI(Module Device) Jun 25 16:14:38.737610 kernel: ACPI: Added _OSI(Processor Device) Jun 25 16:14:38.737615 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 16:14:38.737621 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 16:14:38.737627 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 16:14:38.737632 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jun 25 16:14:38.737638 kernel: ACPI: Interpreter enabled Jun 25 16:14:38.737643 kernel: ACPI: PM: (supports S0 S1 S5) Jun 25 16:14:38.737649 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 16:14:38.737654 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 
16:14:38.737659 kernel: PCI: Using E820 reservations for host bridge windows Jun 25 16:14:38.737664 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jun 25 16:14:38.737670 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jun 25 16:14:38.737796 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 25 16:14:38.737852 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jun 25 16:14:38.737898 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jun 25 16:14:38.737906 kernel: PCI host bridge to bus 0000:00 Jun 25 16:14:38.737954 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 25 16:14:38.737996 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jun 25 16:14:38.738037 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jun 25 16:14:38.738077 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 16:14:38.738119 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jun 25 16:14:38.738159 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jun 25 16:14:38.738213 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jun 25 16:14:38.738265 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jun 25 16:14:38.738318 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jun 25 16:14:38.738370 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jun 25 16:14:38.738420 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jun 25 16:14:38.738466 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jun 25 16:14:38.738512 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jun 25 16:14:38.738558 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jun 25 16:14:38.738604 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jun 25 16:14:38.738655 kernel: pci 
0000:00:07.3: [8086:7113] type 00 class 0x068000 Jun 25 16:14:38.738702 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jun 25 16:14:38.738786 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jun 25 16:14:38.738843 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jun 25 16:14:38.738895 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jun 25 16:14:38.738943 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jun 25 16:14:38.738992 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jun 25 16:14:38.739039 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jun 25 16:14:38.739088 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jun 25 16:14:38.739134 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jun 25 16:14:38.739180 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jun 25 16:14:38.739241 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 25 16:14:38.739298 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jun 25 16:14:38.739350 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.739404 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.739459 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.739511 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.739563 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.739616 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.739666 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.739713 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.739780 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.739835 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.739896 
kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.739944 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.739996 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.740044 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.740097 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.740157 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.740211 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.740259 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.740308 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.740357 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.740421 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.740468 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.740520 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.740566 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.740616 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.740662 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.740721 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.740785 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.740849 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.740909 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.740961 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.741009 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.741063 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.741111 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot 
D3cold Jun 25 16:14:38.741160 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.741207 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.741256 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.741304 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.741357 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.741404 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.741454 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.741501 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.741550 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.741597 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.741649 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.741698 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.742073 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.742126 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.742179 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.742227 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.742276 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.742327 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.742376 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.742423 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.742473 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.742520 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.742569 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.742619 kernel: pci 0000:00:18.4: 
PME# supported from D0 D3hot D3cold Jun 25 16:14:38.742669 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.742717 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.742784 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.742836 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.742891 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jun 25 16:14:38.742943 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.742992 kernel: pci_bus 0000:01: extended config space not accessible Jun 25 16:14:38.743040 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jun 25 16:14:38.743089 kernel: pci_bus 0000:02: extended config space not accessible Jun 25 16:14:38.743098 kernel: acpiphp: Slot [32] registered Jun 25 16:14:38.743104 kernel: acpiphp: Slot [33] registered Jun 25 16:14:38.743109 kernel: acpiphp: Slot [34] registered Jun 25 16:14:38.743115 kernel: acpiphp: Slot [35] registered Jun 25 16:14:38.743122 kernel: acpiphp: Slot [36] registered Jun 25 16:14:38.743128 kernel: acpiphp: Slot [37] registered Jun 25 16:14:38.743133 kernel: acpiphp: Slot [38] registered Jun 25 16:14:38.743138 kernel: acpiphp: Slot [39] registered Jun 25 16:14:38.743144 kernel: acpiphp: Slot [40] registered Jun 25 16:14:38.743149 kernel: acpiphp: Slot [41] registered Jun 25 16:14:38.743154 kernel: acpiphp: Slot [42] registered Jun 25 16:14:38.743159 kernel: acpiphp: Slot [43] registered Jun 25 16:14:38.743164 kernel: acpiphp: Slot [44] registered Jun 25 16:14:38.743171 kernel: acpiphp: Slot [45] registered Jun 25 16:14:38.743176 kernel: acpiphp: Slot [46] registered Jun 25 16:14:38.743181 kernel: acpiphp: Slot [47] registered Jun 25 16:14:38.743187 kernel: acpiphp: Slot [48] registered Jun 25 16:14:38.743192 kernel: acpiphp: Slot [49] registered Jun 25 16:14:38.743197 kernel: acpiphp: Slot [50] registered Jun 25 16:14:38.743202 kernel: acpiphp: Slot [51] 
registered Jun 25 16:14:38.743208 kernel: acpiphp: Slot [52] registered Jun 25 16:14:38.743213 kernel: acpiphp: Slot [53] registered Jun 25 16:14:38.743218 kernel: acpiphp: Slot [54] registered Jun 25 16:14:38.743224 kernel: acpiphp: Slot [55] registered Jun 25 16:14:38.743229 kernel: acpiphp: Slot [56] registered Jun 25 16:14:38.743235 kernel: acpiphp: Slot [57] registered Jun 25 16:14:38.743240 kernel: acpiphp: Slot [58] registered Jun 25 16:14:38.743245 kernel: acpiphp: Slot [59] registered Jun 25 16:14:38.743250 kernel: acpiphp: Slot [60] registered Jun 25 16:14:38.743256 kernel: acpiphp: Slot [61] registered Jun 25 16:14:38.743261 kernel: acpiphp: Slot [62] registered Jun 25 16:14:38.743266 kernel: acpiphp: Slot [63] registered Jun 25 16:14:38.743314 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jun 25 16:14:38.743360 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jun 25 16:14:38.743406 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jun 25 16:14:38.743453 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jun 25 16:14:38.743498 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jun 25 16:14:38.743545 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jun 25 16:14:38.743590 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jun 25 16:14:38.743640 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jun 25 16:14:38.743686 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jun 25 16:14:38.743773 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jun 25 16:14:38.743832 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jun 25 16:14:38.743887 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jun 25 16:14:38.743935 kernel: pci 0000:03:00.0: reg 
0x30: [mem 0x00000000-0x0000ffff pref] Jun 25 16:14:38.743994 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jun 25 16:14:38.744054 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Jun 25 16:14:38.744111 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jun 25 16:14:38.744164 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jun 25 16:14:38.744224 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jun 25 16:14:38.744286 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jun 25 16:14:38.744334 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jun 25 16:14:38.744381 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jun 25 16:14:38.744433 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jun 25 16:14:38.744497 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jun 25 16:14:38.744545 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jun 25 16:14:38.744592 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jun 25 16:14:38.744658 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jun 25 16:14:38.744712 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jun 25 16:14:38.744772 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jun 25 16:14:38.744830 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jun 25 16:14:38.744887 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jun 25 16:14:38.744949 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jun 25 16:14:38.745002 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jun 25 16:14:38.745067 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jun 25 16:14:38.745124 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jun 25 16:14:38.745180 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jun 25 
16:14:38.745238 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jun 25 16:14:38.745298 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jun 25 16:14:38.745355 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jun 25 16:14:38.745410 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jun 25 16:14:38.745465 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jun 25 16:14:38.745518 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jun 25 16:14:38.745578 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jun 25 16:14:38.745643 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jun 25 16:14:38.745695 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jun 25 16:14:38.748838 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jun 25 16:14:38.748910 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jun 25 16:14:38.748973 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jun 25 16:14:38.749033 kernel: pci 0000:0b:00.0: supports D1 D2 Jun 25 16:14:38.749083 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jun 25 16:14:38.749135 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jun 25 16:14:38.749188 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jun 25 16:14:38.749240 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jun 25 16:14:38.749288 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jun 25 16:14:38.749336 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jun 25 16:14:38.749383 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jun 25 16:14:38.749429 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jun 25 16:14:38.749475 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jun 25 16:14:38.749533 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jun 25 16:14:38.749583 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jun 25 16:14:38.749628 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jun 25 16:14:38.749674 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jun 25 16:14:38.749723 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jun 25 16:14:38.749818 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jun 25 16:14:38.749866 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jun 25 16:14:38.749925 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jun 25 16:14:38.749975 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jun 25 16:14:38.750021 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jun 25 16:14:38.750069 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jun 25 16:14:38.750115 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jun 25 16:14:38.750160 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jun 25 16:14:38.750207 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jun 25 16:14:38.750254 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jun 25 16:14:38.750300 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Jun 25 16:14:38.750352 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jun 25 16:14:38.750398 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jun 25 16:14:38.750444 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jun 25 16:14:38.750494 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jun 25 16:14:38.750546 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jun 25 16:14:38.750593 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jun 25 16:14:38.750647 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jun 25 16:14:38.750697 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jun 25 16:14:38.750841 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jun 25 16:14:38.750891 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jun 25 16:14:38.750937 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jun 25 16:14:38.750992 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jun 25 16:14:38.751045 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jun 25 16:14:38.751093 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jun 25 16:14:38.751139 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jun 25 16:14:38.751191 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jun 25 16:14:38.751237 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jun 25 16:14:38.751282 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jun 25 16:14:38.751330 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jun 25 16:14:38.751376 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jun 25 16:14:38.751422 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jun 25 16:14:38.751469 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jun 25 16:14:38.751515 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jun 25 16:14:38.751564 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jun 25 16:14:38.751611 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jun 25 16:14:38.751656 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jun 25 16:14:38.751702 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jun 25 16:14:38.751769 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jun 25 16:14:38.751818 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jun 25 16:14:38.751864 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jun 25 16:14:38.751920 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jun 25 16:14:38.751971 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jun 25 16:14:38.752018 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jun 25 16:14:38.752064 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jun 25 16:14:38.752113 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jun 25 16:14:38.752165 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jun 25 16:14:38.752213 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jun 25 16:14:38.752259 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jun 25 16:14:38.752307 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jun 25 16:14:38.752356 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jun 25 16:14:38.752402 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jun 25 16:14:38.752450 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jun 25 16:14:38.752497 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jun 25 16:14:38.752542 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jun 25 16:14:38.752591 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jun 25 
16:14:38.752643 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jun 25 16:14:38.752690 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jun 25 16:14:38.752787 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jun 25 16:14:38.752838 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jun 25 16:14:38.752888 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jun 25 16:14:38.752938 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jun 25 16:14:38.752985 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jun 25 16:14:38.753030 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jun 25 16:14:38.753079 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jun 25 16:14:38.753125 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jun 25 16:14:38.753174 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jun 25 16:14:38.753183 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jun 25 16:14:38.753188 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jun 25 16:14:38.753194 kernel: ACPI: PCI: Interrupt link LNKB disabled Jun 25 16:14:38.753200 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 16:14:38.753205 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jun 25 16:14:38.753211 kernel: iommu: Default domain type: Translated Jun 25 16:14:38.753216 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 16:14:38.753222 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 16:14:38.753230 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 16:14:38.753236 kernel: PTP clock support registered Jun 25 16:14:38.753241 kernel: PCI: Using ACPI for IRQ routing Jun 25 16:14:38.753246 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 16:14:38.753252 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jun 25 16:14:38.753258 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jun 25 16:14:38.753304 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jun 25 16:14:38.753351 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jun 25 16:14:38.753397 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 25 16:14:38.753407 kernel: vgaarb: loaded Jun 25 16:14:38.753412 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jun 25 16:14:38.753420 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jun 25 16:14:38.753429 kernel: clocksource: Switched to clocksource tsc-early Jun 25 16:14:38.753434 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 16:14:38.753440 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 16:14:38.753446 kernel: pnp: PnP ACPI init Jun 25 16:14:38.753500 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jun 25 16:14:38.753547 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jun 25 16:14:38.753589 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jun 25 16:14:38.753633 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jun 25 16:14:38.753679 kernel: pnp 00:06: [dma 2] Jun 25 16:14:38.753724 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jun 25 16:14:38.753787 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jun 25 16:14:38.753831 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jun 25 16:14:38.753841 kernel: pnp: PnP ACPI: found 8 devices Jun 25 16:14:38.753847 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 
0xffffff, max_idle_ns: 2085701024 ns Jun 25 16:14:38.753853 kernel: NET: Registered PF_INET protocol family Jun 25 16:14:38.753858 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 16:14:38.753864 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 25 16:14:38.753869 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 16:14:38.753874 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 16:14:38.753880 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 25 16:14:38.753886 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 25 16:14:38.753892 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 16:14:38.753897 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 16:14:38.753903 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 16:14:38.753908 kernel: NET: Registered PF_XDP protocol family Jun 25 16:14:38.753958 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jun 25 16:14:38.754018 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jun 25 16:14:38.754069 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jun 25 16:14:38.754130 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jun 25 16:14:38.754179 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jun 25 16:14:38.754228 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jun 25 16:14:38.754276 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jun 25 16:14:38.754322 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 
1000 Jun 25 16:14:38.754369 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jun 25 16:14:38.754418 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jun 25 16:14:38.754465 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jun 25 16:14:38.754512 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jun 25 16:14:38.754564 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jun 25 16:14:38.754612 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jun 25 16:14:38.754662 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jun 25 16:14:38.754708 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jun 25 16:14:38.754775 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jun 25 16:14:38.754824 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jun 25 16:14:38.754877 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jun 25 16:14:38.754925 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jun 25 16:14:38.754984 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jun 25 16:14:38.755031 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jun 25 16:14:38.755077 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jun 25 16:14:38.755123 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jun 25 16:14:38.755170 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jun 25 16:14:38.755217 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.755264 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io 
size 0x1000] Jun 25 16:14:38.755313 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.755359 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.755406 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.755452 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.755498 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.755545 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.755591 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.755639 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.755687 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.755733 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.755817 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.755864 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.755916 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.755963 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.756009 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.756056 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.756105 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.756155 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.756201 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.756248 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.756294 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.756342 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.756388 
kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.756434 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.756483 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.756530 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.756576 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.756623 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.756669 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.756715 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.756782 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.756831 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.756881 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.756929 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.756975 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.757024 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.757071 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.757118 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.757165 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.757212 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.757267 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.757314 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.757360 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.757406 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.757452 kernel: pci 0000:00:18.5: BAR 13: no 
space for [io size 0x1000] Jun 25 16:14:38.757498 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.757544 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.757590 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.757637 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.757682 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.757732 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.757826 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.757883 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.757940 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.757997 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.758054 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.758109 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.758163 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.758215 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.758267 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.758313 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.758367 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.758420 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.758477 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.758532 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.758578 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.758624 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jun 25 
16:14:38.758670 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.758716 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.758781 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.758828 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.758875 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.758926 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.758974 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.759020 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.759067 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.759130 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.759181 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.759230 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.759280 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.759342 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jun 25 16:14:38.759389 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jun 25 16:14:38.759436 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jun 25 16:14:38.759482 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jun 25 16:14:38.759529 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jun 25 16:14:38.759574 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jun 25 16:14:38.759620 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jun 25 16:14:38.759673 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jun 25 16:14:38.759721 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jun 25 16:14:38.759818 kernel: pci 0000:00:15.0: bridge window [io 
0x4000-0x4fff] Jun 25 16:14:38.759865 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jun 25 16:14:38.759919 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jun 25 16:14:38.759966 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jun 25 16:14:38.760012 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jun 25 16:14:38.760059 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jun 25 16:14:38.760107 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jun 25 16:14:38.760155 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jun 25 16:14:38.760202 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jun 25 16:14:38.760248 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jun 25 16:14:38.760294 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jun 25 16:14:38.760341 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jun 25 16:14:38.760387 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jun 25 16:14:38.760433 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jun 25 16:14:38.760478 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jun 25 16:14:38.760525 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jun 25 16:14:38.760573 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jun 25 16:14:38.760623 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jun 25 16:14:38.760669 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jun 25 16:14:38.760715 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jun 25 16:14:38.760781 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jun 25 16:14:38.760831 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jun 25 16:14:38.760880 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jun 25 16:14:38.760928 kernel: pci 
0000:00:15.7: PCI bridge to [bus 0a] Jun 25 16:14:38.760974 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jun 25 16:14:38.761020 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jun 25 16:14:38.761070 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jun 25 16:14:38.761118 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jun 25 16:14:38.761164 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jun 25 16:14:38.761210 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jun 25 16:14:38.761257 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jun 25 16:14:38.761307 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jun 25 16:14:38.761354 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jun 25 16:14:38.761400 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jun 25 16:14:38.761447 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jun 25 16:14:38.761495 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jun 25 16:14:38.761541 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jun 25 16:14:38.761587 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jun 25 16:14:38.761632 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jun 25 16:14:38.761678 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jun 25 16:14:38.761727 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jun 25 16:14:38.761828 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jun 25 16:14:38.761876 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jun 25 16:14:38.761922 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jun 25 16:14:38.761968 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jun 25 16:14:38.762014 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jun 25 16:14:38.762060 
kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jun 25 16:14:38.762106 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jun 25 16:14:38.762153 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jun 25 16:14:38.762198 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jun 25 16:14:38.762248 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jun 25 16:14:38.762294 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jun 25 16:14:38.762340 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jun 25 16:14:38.762399 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jun 25 16:14:38.762449 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jun 25 16:14:38.762494 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jun 25 16:14:38.762541 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jun 25 16:14:38.762586 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jun 25 16:14:38.762634 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jun 25 16:14:38.762682 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jun 25 16:14:38.762729 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jun 25 16:14:38.762791 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jun 25 16:14:38.762839 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jun 25 16:14:38.762886 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jun 25 16:14:38.762932 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jun 25 16:14:38.762977 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jun 25 16:14:38.763025 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jun 25 16:14:38.763070 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jun 25 16:14:38.763117 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit 
pref] Jun 25 16:14:38.763167 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jun 25 16:14:38.763213 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jun 25 16:14:38.763259 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jun 25 16:14:38.763306 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jun 25 16:14:38.763352 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jun 25 16:14:38.763398 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jun 25 16:14:38.763446 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jun 25 16:14:38.763495 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jun 25 16:14:38.763542 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jun 25 16:14:38.763591 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jun 25 16:14:38.763637 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jun 25 16:14:38.763683 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jun 25 16:14:38.763731 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jun 25 16:14:38.763823 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jun 25 16:14:38.763869 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jun 25 16:14:38.763915 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jun 25 16:14:38.763962 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jun 25 16:14:38.764009 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jun 25 16:14:38.764054 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jun 25 16:14:38.764104 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jun 25 16:14:38.764150 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jun 25 16:14:38.764196 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jun 25 16:14:38.764241 kernel: pci 0000:00:18.2: bridge window [mem 
0xe7100000-0xe71fffff 64bit pref] Jun 25 16:14:38.764288 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jun 25 16:14:38.764335 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jun 25 16:14:38.764381 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jun 25 16:14:38.764429 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jun 25 16:14:38.764475 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jun 25 16:14:38.764524 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jun 25 16:14:38.764571 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jun 25 16:14:38.764617 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jun 25 16:14:38.764664 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jun 25 16:14:38.764710 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jun 25 16:14:38.764775 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jun 25 16:14:38.764823 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jun 25 16:14:38.764871 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jun 25 16:14:38.764922 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jun 25 16:14:38.764968 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jun 25 16:14:38.765017 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jun 25 16:14:38.765059 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jun 25 16:14:38.765100 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jun 25 16:14:38.765141 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jun 25 16:14:38.765182 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jun 25 16:14:38.765228 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jun 25 16:14:38.765271 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jun 25 16:14:38.765316 
kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jun 25 16:14:38.765359 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jun 25 16:14:38.765401 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jun 25 16:14:38.765444 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jun 25 16:14:38.765486 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jun 25 16:14:38.765528 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jun 25 16:14:38.765576 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jun 25 16:14:38.765620 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jun 25 16:14:38.765665 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jun 25 16:14:38.765712 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jun 25 16:14:38.765797 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jun 25 16:14:38.765840 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jun 25 16:14:38.765887 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jun 25 16:14:38.765930 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jun 25 16:14:38.765975 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jun 25 16:14:38.766024 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jun 25 16:14:38.766066 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jun 25 16:14:38.766113 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jun 25 16:14:38.766156 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jun 25 16:14:38.766203 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jun 25 16:14:38.766246 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jun 25 16:14:38.766301 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jun 25 16:14:38.766345 kernel: pci_bus 
0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jun 25 16:14:38.766391 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jun 25 16:14:38.766435 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jun 25 16:14:38.766483 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jun 25 16:14:38.766528 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jun 25 16:14:38.766570 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jun 25 16:14:38.766620 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jun 25 16:14:38.766662 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jun 25 16:14:38.766705 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jun 25 16:14:38.766762 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jun 25 16:14:38.766806 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jun 25 16:14:38.766851 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jun 25 16:14:38.766901 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jun 25 16:14:38.766944 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jun 25 16:14:38.766991 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jun 25 16:14:38.767035 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jun 25 16:14:38.767082 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jun 25 16:14:38.767129 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jun 25 16:14:38.767174 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jun 25 16:14:38.767218 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jun 25 16:14:38.767266 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jun 25 16:14:38.767310 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jun 25 16:14:38.767362 kernel: pci_bus 
0000:13: resource 0 [io 0x6000-0x6fff] Jun 25 16:14:38.767406 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jun 25 16:14:38.767452 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jun 25 16:14:38.767498 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jun 25 16:14:38.767540 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jun 25 16:14:38.767583 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jun 25 16:14:38.767630 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jun 25 16:14:38.767673 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jun 25 16:14:38.767719 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jun 25 16:14:38.767772 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jun 25 16:14:38.767816 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jun 25 16:14:38.767865 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jun 25 16:14:38.767912 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jun 25 16:14:38.767963 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jun 25 16:14:38.768006 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jun 25 16:14:38.768056 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jun 25 16:14:38.768100 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jun 25 16:14:38.768150 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jun 25 16:14:38.768193 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jun 25 16:14:38.768240 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jun 25 16:14:38.768285 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jun 25 16:14:38.768328 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jun 25 16:14:38.768375 kernel: pci_bus 0000:1c: resource 0 
[io 0xb000-0xbfff] Jun 25 16:14:38.768418 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jun 25 16:14:38.768462 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jun 25 16:14:38.768509 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jun 25 16:14:38.768551 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jun 25 16:14:38.768601 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jun 25 16:14:38.768644 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jun 25 16:14:38.768693 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jun 25 16:14:38.768742 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jun 25 16:14:38.768791 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jun 25 16:14:38.768835 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jun 25 16:14:38.768884 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jun 25 16:14:38.768927 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jun 25 16:14:38.768974 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jun 25 16:14:38.769032 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jun 25 16:14:38.769085 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 25 16:14:38.769094 kernel: PCI: CLS 32 bytes, default 64 Jun 25 16:14:38.769101 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 25 16:14:38.769109 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jun 25 16:14:38.769115 kernel: clocksource: Switched to clocksource tsc Jun 25 16:14:38.769121 kernel: Initialise system trusted keyrings Jun 25 16:14:38.769127 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 25 16:14:38.769132 kernel: Key type asymmetric registered Jun 25 
16:14:38.769138 kernel: Asymmetric key parser 'x509' registered Jun 25 16:14:38.769143 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 16:14:38.769149 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 16:14:38.769155 kernel: io scheduler mq-deadline registered Jun 25 16:14:38.769161 kernel: io scheduler kyber registered Jun 25 16:14:38.769167 kernel: io scheduler bfq registered Jun 25 16:14:38.769215 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jun 25 16:14:38.769263 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.769312 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jun 25 16:14:38.769359 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.769407 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jun 25 16:14:38.769455 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.769504 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jun 25 16:14:38.769552 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.769600 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jun 25 16:14:38.769647 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.769694 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jun 25 16:14:38.769788 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.769839 kernel: pcieport 
0000:00:15.6: PME: Signaling with IRQ 30 Jun 25 16:14:38.769891 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.769940 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jun 25 16:14:38.769987 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.770035 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jun 25 16:14:38.770085 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.770132 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jun 25 16:14:38.770178 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.770225 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jun 25 16:14:38.770273 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.770321 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jun 25 16:14:38.770367 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.770417 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jun 25 16:14:38.770464 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.770512 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jun 25 16:14:38.770558 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 
16:14:38.770606 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jun 25 16:14:38.770656 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.770703 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jun 25 16:14:38.770757 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.770805 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jun 25 16:14:38.770852 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.770898 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jun 25 16:14:38.770947 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.770994 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jun 25 16:14:38.771040 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.771087 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jun 25 16:14:38.771134 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.771181 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jun 25 16:14:38.771231 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.771277 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jun 25 16:14:38.771323 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ 
IbPresDis- LLActRep+ Jun 25 16:14:38.771370 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jun 25 16:14:38.771417 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.771466 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jun 25 16:14:38.771513 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.771561 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jun 25 16:14:38.771606 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.771653 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jun 25 16:14:38.771698 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.771751 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jun 25 16:14:38.771802 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.771850 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jun 25 16:14:38.771897 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.771944 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jun 25 16:14:38.771991 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.772040 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jun 25 16:14:38.772086 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ 
Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.772133 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jun 25 16:14:38.772180 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.772227 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jun 25 16:14:38.772276 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 25 16:14:38.772284 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 16:14:38.772291 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 16:14:38.772297 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 16:14:38.772303 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jun 25 16:14:38.772308 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 16:14:38.772314 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 16:14:38.772361 kernel: rtc_cmos 00:01: registered as rtc0 Jun 25 16:14:38.772408 kernel: rtc_cmos 00:01: setting system clock to 2024-06-25T16:14:38 UTC (1719332078) Jun 25 16:14:38.772450 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jun 25 16:14:38.772459 kernel: fail to initialize ptp_kvm Jun 25 16:14:38.772465 kernel: intel_pstate: CPU model not supported Jun 25 16:14:38.772471 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jun 25 16:14:38.772476 kernel: NET: Registered PF_INET6 protocol family Jun 25 16:14:38.772482 kernel: Segment Routing with IPv6 Jun 25 16:14:38.772488 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 16:14:38.772495 kernel: NET: Registered PF_PACKET protocol family Jun 25 16:14:38.772501 kernel: Key type dns_resolver registered Jun 25 16:14:38.772507 kernel: IPI shorthand broadcast: enabled Jun 25 16:14:38.772514 
kernel: sched_clock: Marking stable (890232714, 220363841)->(1173891398, -63294843) Jun 25 16:14:38.772520 kernel: registered taskstats version 1 Jun 25 16:14:38.772525 kernel: Loading compiled-in X.509 certificates Jun 25 16:14:38.772531 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: c37bb6ef57220bb1c07535cfcaa08c84d806a137' Jun 25 16:14:38.772536 kernel: Key type .fscrypt registered Jun 25 16:14:38.772542 kernel: Key type fscrypt-provisioning registered Jun 25 16:14:38.772549 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 16:14:38.772554 kernel: ima: Allocated hash algorithm: sha1 Jun 25 16:14:38.772560 kernel: ima: No architecture policies found Jun 25 16:14:38.772566 kernel: clk: Disabling unused clocks Jun 25 16:14:38.772572 kernel: Freeing unused kernel image (initmem) memory: 47156K Jun 25 16:14:38.772577 kernel: Write protecting the kernel read-only data: 34816k Jun 25 16:14:38.772583 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jun 25 16:14:38.772589 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jun 25 16:14:38.772594 kernel: Run /init as init process Jun 25 16:14:38.772601 kernel: with arguments: Jun 25 16:14:38.772607 kernel: /init Jun 25 16:14:38.772612 kernel: with environment: Jun 25 16:14:38.772618 kernel: HOME=/ Jun 25 16:14:38.772623 kernel: TERM=linux Jun 25 16:14:38.772628 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 16:14:38.772636 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:14:38.772643 systemd[1]: Detected virtualization vmware. Jun 25 16:14:38.772650 systemd[1]: Detected architecture x86-64. Jun 25 16:14:38.772656 systemd[1]: Running in initrd. 
Jun 25 16:14:38.772662 systemd[1]: No hostname configured, using default hostname. Jun 25 16:14:38.772667 systemd[1]: Hostname set to <localhost>. Jun 25 16:14:38.772673 systemd[1]: Initializing machine ID from random generator. Jun 25 16:14:38.772680 systemd[1]: Queued start job for default target initrd.target. Jun 25 16:14:38.772686 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:14:38.772692 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:14:38.772699 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:14:38.772704 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:14:38.772710 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:14:38.772716 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:14:38.772722 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:14:38.772728 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:14:38.772739 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 16:14:38.772746 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 16:14:38.772753 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 16:14:38.772759 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:14:38.772765 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:14:38.772771 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:14:38.772777 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:14:38.772782 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:14:38.772788 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 16:14:38.772794 systemd[1]: Starting systemd-fsck-usr.service... 
Jun 25 16:14:38.772801 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 16:14:38.772807 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:14:38.772813 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jun 25 16:14:38.772819 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:14:38.772825 kernel: audit: type=1130 audit(1719332078.733:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:38.772831 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 16:14:38.772837 kernel: audit: type=1130 audit(1719332078.736:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:38.772843 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:14:38.772850 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:14:38.772856 kernel: audit: type=1130 audit(1719332078.765:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:38.772862 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:14:38.772868 kernel: audit: type=1130 audit(1719332078.769:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:14:38.772878 systemd-journald[212]: Journal started Jun 25 16:14:38.772923 systemd-journald[212]: Runtime Journal (/run/log/journal/b10f946f2e1a442290afeecd16915230) is 4.8M, max 38.7M, 33.9M free. Jun 25 16:14:38.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:38.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:38.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:38.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:38.774568 systemd-modules-load[213]: Inserted module 'overlay' Jun 25 16:14:38.777181 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 16:14:38.783709 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:14:38.783733 kernel: audit: type=1130 audit(1719332078.777:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:38.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:38.781548 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... 
Jun 25 16:14:38.796771 kernel: audit: type=1130 audit(1719332078.784:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:38.796802 kernel: audit: type=1334 audit(1719332078.788:8): prog-id=6 op=LOAD Jun 25 16:14:38.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:38.788000 audit: BPF prog-id=6 op=LOAD Jun 25 16:14:38.786413 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:14:38.794643 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:14:38.799745 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 16:14:38.801404 systemd-modules-load[213]: Inserted module 'br_netfilter' Jun 25 16:14:38.801744 kernel: Bridge firewalling registered Jun 25 16:14:38.801876 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:14:38.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:38.806745 kernel: audit: type=1130 audit(1719332078.802:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:38.808828 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jun 25 16:14:38.820775 kernel: SCSI subsystem initialized Jun 25 16:14:38.820809 dracut-cmdline[230]: dracut-dracut-053 Jun 25 16:14:38.822243 systemd-resolved[228]: Positive Trust Anchors: Jun 25 16:14:38.822535 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:14:38.822254 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:14:38.822290 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:14:38.825775 systemd-resolved[228]: Defaulting to hostname 'linux'. Jun 25 16:14:38.826296 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:14:38.829977 kernel: audit: type=1130 audit(1719332078.824:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:38.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:14:38.826662 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:14:38.835755 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 16:14:38.835780 kernel: device-mapper: uevent: version 1.0.3 Jun 25 16:14:38.837752 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 16:14:38.840224 systemd-modules-load[213]: Inserted module 'dm_multipath' Jun 25 16:14:38.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:38.840674 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:14:38.842877 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:14:38.846485 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:14:38.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:38.872752 kernel: Loading iSCSI transport class v2.0-870. Jun 25 16:14:38.881752 kernel: iscsi: registered transport (tcp) Jun 25 16:14:38.896750 kernel: iscsi: registered transport (qla4xxx) Jun 25 16:14:38.896791 kernel: QLogic iSCSI HBA Driver Jun 25 16:14:38.916548 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 16:14:38.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:38.921839 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jun 25 16:14:38.967753 kernel: raid6: avx2x4 gen() 47601 MB/s Jun 25 16:14:38.984752 kernel: raid6: avx2x2 gen() 53708 MB/s Jun 25 16:14:39.001938 kernel: raid6: avx2x1 gen() 42194 MB/s Jun 25 16:14:39.001961 kernel: raid6: using algorithm avx2x2 gen() 53708 MB/s Jun 25 16:14:39.019930 kernel: raid6: .... xor() 30509 MB/s, rmw enabled Jun 25 16:14:39.019966 kernel: raid6: using avx2x2 recovery algorithm Jun 25 16:14:39.022749 kernel: xor: automatically using best checksumming function avx Jun 25 16:14:39.114759 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jun 25 16:14:39.119751 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:14:39.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:39.118000 audit: BPF prog-id=7 op=LOAD Jun 25 16:14:39.118000 audit: BPF prog-id=8 op=LOAD Jun 25 16:14:39.125837 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:14:39.132953 systemd-udevd[412]: Using default interface naming scheme 'v252'. Jun 25 16:14:39.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:39.135585 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:14:39.136123 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 16:14:39.143644 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Jun 25 16:14:39.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:14:39.159687 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:14:39.164840 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:14:39.225455 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:14:39.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:39.270037 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jun 25 16:14:39.270073 kernel: vmw_pvscsi: using 64bit dma Jun 25 16:14:39.270754 kernel: vmw_pvscsi: max_id: 16 Jun 25 16:14:39.270775 kernel: vmw_pvscsi: setting ring_pages to 8 Jun 25 16:14:39.272342 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jun 25 16:14:39.272357 kernel: vmw_pvscsi: enabling reqCallThreshold Jun 25 16:14:39.274505 kernel: vmw_pvscsi: driver-based request coalescing enabled Jun 25 16:14:39.274522 kernel: vmw_pvscsi: using MSI-X Jun 25 16:14:39.274530 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jun 25 16:14:39.294102 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jun 25 16:14:39.294179 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jun 25 16:14:39.294241 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jun 25 16:14:39.294321 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jun 25 16:14:39.305752 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 16:14:39.309287 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jun 25 16:14:39.317759 kernel: AVX2 version of gcm_enc/dec engaged. 
Jun 25 16:14:39.317793 kernel: AES CTR mode by8 optimization enabled Jun 25 16:14:39.328751 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jun 25 16:14:39.337306 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 25 16:14:39.337388 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jun 25 16:14:39.337455 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jun 25 16:14:39.337516 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jun 25 16:14:39.337576 kernel: libata version 3.00 loaded. Jun 25 16:14:39.337584 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:14:39.337591 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 25 16:14:39.339748 kernel: ata_piix 0000:00:07.1: version 2.13 Jun 25 16:14:39.344255 kernel: scsi host1: ata_piix Jun 25 16:14:39.344343 kernel: scsi host2: ata_piix Jun 25 16:14:39.344421 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jun 25 16:14:39.344430 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jun 25 16:14:39.363748 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (470) Jun 25 16:14:39.366002 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jun 25 16:14:39.368632 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jun 25 16:14:39.370696 kernel: BTRFS: device fsid dda7891e-deba-495b-b677-4df6bea75326 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (463) Jun 25 16:14:39.372598 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jun 25 16:14:39.374139 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jun 25 16:14:39.374252 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jun 25 16:14:39.383816 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jun 25 16:14:39.406753 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:14:39.410747 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:14:39.515767 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jun 25 16:14:39.519752 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jun 25 16:14:39.550749 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jun 25 16:14:39.574753 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 16:14:39.574766 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jun 25 16:14:40.410764 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:14:40.411066 disk-uuid[547]: The operation has completed successfully. Jun 25 16:14:40.444410 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 16:14:40.444676 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 16:14:40.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:40.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:40.449958 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 16:14:40.451491 sh[573]: Success Jun 25 16:14:40.459750 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 25 16:14:40.507681 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 16:14:40.508258 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 16:14:40.508916 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jun 25 16:14:40.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:40.528556 kernel: BTRFS info (device dm-0): first mount of filesystem dda7891e-deba-495b-b677-4df6bea75326 Jun 25 16:14:40.528588 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:14:40.528596 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 16:14:40.529648 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 16:14:40.530447 kernel: BTRFS info (device dm-0): using free space tree Jun 25 16:14:40.538751 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jun 25 16:14:40.540719 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 16:14:40.554843 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jun 25 16:14:40.555388 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 16:14:40.571777 kernel: BTRFS info (device sda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:14:40.571809 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:14:40.571818 kernel: BTRFS info (device sda6): using free space tree Jun 25 16:14:40.574755 kernel: BTRFS info (device sda6): enabling ssd optimizations Jun 25 16:14:40.580332 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 16:14:40.582139 kernel: BTRFS info (device sda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:14:40.589205 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 16:14:40.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:14:40.593243 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 16:14:40.626507 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jun 25 16:14:40.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:40.631009 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 16:14:40.660869 ignition[632]: Ignition 2.15.0 Jun 25 16:14:40.661142 ignition[632]: Stage: fetch-offline Jun 25 16:14:40.661514 ignition[632]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:14:40.661525 ignition[632]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 25 16:14:40.661585 ignition[632]: parsed url from cmdline: "" Jun 25 16:14:40.661586 ignition[632]: no config URL provided Jun 25 16:14:40.661590 ignition[632]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:14:40.661595 ignition[632]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:14:40.661986 ignition[632]: config successfully fetched Jun 25 16:14:40.662006 ignition[632]: parsing config with SHA512: 81b0146f6f0ec5763746c7359349fb23aaacb170a484c6e67d24f97b3ed08f8fd1ff36f836be611cb9892f735cc3440641df8d418a3f9353d7c90f0feb474a0c Jun 25 16:14:40.664443 unknown[632]: fetched base config from "system" Jun 25 16:14:40.664449 unknown[632]: fetched user config from "vmware" Jun 25 16:14:40.664870 ignition[632]: fetch-offline: fetch-offline passed Jun 25 16:14:40.664908 ignition[632]: Ignition finished successfully Jun 25 16:14:40.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:14:40.666022 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:14:40.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:40.681000 audit: BPF prog-id=9 op=LOAD Jun 25 16:14:40.683120 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:14:40.685845 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:14:40.697962 systemd-networkd[761]: lo: Link UP Jun 25 16:14:40.697968 systemd-networkd[761]: lo: Gained carrier Jun 25 16:14:40.698228 systemd-networkd[761]: Enumeration completed Jun 25 16:14:40.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:40.698416 systemd-networkd[761]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jun 25 16:14:40.698507 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:14:40.698667 systemd[1]: Reached target network.target - Network. Jun 25 16:14:40.701660 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jun 25 16:14:40.701752 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jun 25 16:14:40.698775 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 25 16:14:40.701169 systemd-networkd[761]: ens192: Link UP Jun 25 16:14:40.701171 systemd-networkd[761]: ens192: Gained carrier Jun 25 16:14:40.702832 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jun 25 16:14:40.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:40.703457 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:14:40.708800 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:14:40.709790 systemd[1]: Starting iscsid.service - Open-iSCSI... Jun 25 16:14:40.711757 iscsid[771]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:14:40.711757 iscsid[771]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jun 25 16:14:40.711757 iscsid[771]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 16:14:40.711757 iscsid[771]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 16:14:40.712611 iscsid[771]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:14:40.712611 iscsid[771]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 16:14:40.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:40.713525 ignition[763]: Ignition 2.15.0 Jun 25 16:14:40.712929 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 16:14:40.713530 ignition[763]: Stage: kargs Jun 25 16:14:40.713485 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
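The iscsid warning above states its own remedy: create /etc/iscsi/initiatorname.iscsi with a single InitiatorName= line. A file of the requested shape would look like this (the IQN below is a made-up example, not this machine's identity; it is harmless here since the warning also notes it only matters for software or partial-offload iSCSI):

```ini
; /etc/iscsi/initiatorname.iscsi -- example only; choose your own IQN.
InitiatorName=iqn.2001-04.com.example:node1
```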
Jun 25 16:14:40.713633 ignition[763]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:14:40.713642 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 25 16:14:40.714984 ignition[763]: kargs: kargs passed Jun 25 16:14:40.715010 ignition[763]: Ignition finished successfully Jun 25 16:14:40.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:40.717260 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 16:14:40.718025 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 16:14:40.721692 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 16:14:40.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:40.721865 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:14:40.722064 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:14:40.722248 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:14:40.724836 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 16:14:40.729799 ignition[776]: Ignition 2.15.0 Jun 25 16:14:40.731046 ignition[776]: Stage: disks Jun 25 16:14:40.731095 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:14:40.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:14:40.731401 ignition[776]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:14:40.731530 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 25 16:14:40.732325 ignition[776]: disks: disks passed Jun 25 16:14:40.732457 ignition[776]: Ignition finished successfully Jun 25 16:14:40.733063 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 16:14:40.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:40.733218 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 16:14:40.733349 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:14:40.733534 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:14:40.733723 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:14:40.733945 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:14:40.737016 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 16:14:40.747007 systemd-fsck[795]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jun 25 16:14:40.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:40.748584 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 16:14:40.751935 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 16:14:40.798774 systemd-resolved[228]: Detected conflict on linux IN A 139.178.70.105 Jun 25 16:14:40.798783 systemd-resolved[228]: Hostname conflict, changing published hostname from 'linux' to 'linux8'. Jun 25 16:14:40.808621 systemd[1]: Mounted sysroot.mount - /sysroot. 
Jun 25 16:14:40.808832 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 16:14:40.808787 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 16:14:40.820842 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:14:40.822313 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 16:14:40.822704 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 16:14:40.822743 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 16:14:40.822758 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:14:40.824732 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 16:14:40.825280 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 16:14:40.830748 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (801) Jun 25 16:14:40.833650 kernel: BTRFS info (device sda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:14:40.833667 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:14:40.833674 kernel: BTRFS info (device sda6): using free space tree Jun 25 16:14:40.837752 kernel: BTRFS info (device sda6): enabling ssd optimizations Jun 25 16:14:40.838554 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 16:14:40.854819 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 16:14:40.857124 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory Jun 25 16:14:40.859384 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 16:14:40.861385 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 16:14:40.921041 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 16:14:40.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:40.923819 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 16:14:40.924303 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 16:14:40.928078 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 16:14:40.929755 kernel: BTRFS info (device sda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:14:40.939992 ignition[912]: INFO : Ignition 2.15.0 Jun 25 16:14:40.940249 ignition[912]: INFO : Stage: mount Jun 25 16:14:40.940425 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:14:40.940558 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 25 16:14:40.941281 ignition[912]: INFO : mount: mount passed Jun 25 16:14:40.941417 ignition[912]: INFO : Ignition finished successfully Jun 25 16:14:40.942117 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 16:14:40.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:40.945835 systemd[1]: Starting ignition-files.service - Ignition (files)... 
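The four "cut: /sysroot/etc/passwd: No such file or directory" lines above indicate that initrd-setup-root tried to read account databases that do not exist yet on the freshly created root filesystem. A sketch of what such an extraction step looks like, assuming the script uses cut(1) on those files (the exact command initrd-setup-root runs is not shown in the log; demonstrated here against a temporary file rather than /sysroot):

```shell
# Assumption: initrd-setup-root extracts account names roughly like
#   cut -d: -f1 /sysroot/etc/passwd
# which fails with "No such file or directory" when the file is absent.
tmp=$(mktemp)
printf 'root:x:0:0:root:/root:/bin/bash\ncore:x:500:500::/home/core:/bin/bash\n' > "$tmp"
cut -d: -f1 "$tmp"   # prints the first colon-separated field: the user names
rm -f "$tmp"
```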
Jun 25 16:14:40.950118 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:14:40.954227 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 16:14:40.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:40.964931 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (921) Jun 25 16:14:40.964959 kernel: BTRFS info (device sda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:14:40.966837 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:14:40.966853 kernel: BTRFS info (device sda6): using free space tree Jun 25 16:14:40.972101 kernel: BTRFS info (device sda6): enabling ssd optimizations Jun 25 16:14:40.971256 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:14:40.984143 ignition[940]: INFO : Ignition 2.15.0 Jun 25 16:14:40.984143 ignition[940]: INFO : Stage: files Jun 25 16:14:40.984485 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:14:40.984485 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 25 16:14:40.984874 ignition[940]: DEBUG : files: compiled without relabeling support, skipping Jun 25 16:14:40.986615 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 16:14:40.986615 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 16:14:40.988475 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 16:14:40.988630 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 16:14:40.988779 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 16:14:40.988710 unknown[940]: wrote ssh authorized keys 
file for user: core Jun 25 16:14:40.990123 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 16:14:40.990302 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 16:14:40.990302 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:14:40.990302 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 16:14:41.019083 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 25 16:14:41.104369 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:14:41.104642 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 25 16:14:41.104958 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 16:14:41.105156 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:14:41.105417 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:14:41.105594 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:14:41.105845 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:14:41.106032 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" 
Jun 25 16:14:41.106263 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:14:41.106540 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:14:41.106847 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:14:41.107034 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:14:41.107305 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:14:41.107528 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:14:41.107780 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jun 25 16:14:41.552655 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 25 16:14:41.784983 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:14:41.785280 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jun 25 16:14:41.785641 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jun 25 16:14:41.785854 ignition[940]: INFO : files: op(d): 
[started] processing unit "containerd.service" Jun 25 16:14:41.788556 ignition[940]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 16:14:41.788911 ignition[940]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 16:14:41.789133 ignition[940]: INFO : files: op(d): [finished] processing unit "containerd.service" Jun 25 16:14:41.789269 ignition[940]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jun 25 16:14:41.789429 ignition[940]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:14:41.789687 ignition[940]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:14:41.789879 ignition[940]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jun 25 16:14:41.790030 ignition[940]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jun 25 16:14:41.790193 ignition[940]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 16:14:41.790443 ignition[940]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 16:14:41.790634 ignition[940]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jun 25 16:14:41.790790 ignition[940]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jun 25 16:14:41.790939 ignition[940]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 16:14:41.827127 ignition[940]: INFO : files: 
op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 16:14:41.827446 ignition[940]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jun 25 16:14:41.827613 ignition[940]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Jun 25 16:14:41.827802 ignition[940]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 16:14:41.828069 ignition[940]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:14:41.828315 ignition[940]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:14:41.828503 ignition[940]: INFO : files: files passed Jun 25 16:14:41.828642 ignition[940]: INFO : Ignition finished successfully Jun 25 16:14:41.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.829423 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 16:14:41.834893 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 16:14:41.835529 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 16:14:41.838033 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 16:14:41.838089 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 16:14:41.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:14:41.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.841113 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:14:41.841391 initrd-setup-root-after-ignition[967]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:14:41.842199 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:14:41.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.842992 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:14:41.843141 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 16:14:41.845893 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 16:14:41.854041 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 16:14:41.854108 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 16:14:41.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.854359 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
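During the files stage above, Ignition wrote a systemd drop-in, 10-use-cgroupfs.conf, for containerd.service, consistent with the /etc/flatcar-cgroupv1 marker file written earlier (both switch the node to legacy cgroup v1/cgroupfs behavior). The drop-in's contents are not shown in the log; a drop-in of that general shape might look like the following, where the config path is purely illustrative:

```ini
; Hypothetical contents of
; /etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf.
; The real drop-in's contents are not shown in this log, and the
; config path below is an assumption for illustration only.
[Service]
ExecStart=
ExecStart=/usr/bin/containerd --config /usr/share/containerd/config-cgroupfs.toml
```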
Jun 25 16:14:41.854477 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 16:14:41.854687 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 16:14:41.855147 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 16:14:41.862074 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:14:41.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.864869 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 16:14:41.870122 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:14:41.870463 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:14:41.870768 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 16:14:41.871031 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 16:14:41.871234 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:14:41.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.871780 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 16:14:41.872054 systemd[1]: Stopped target basic.target - Basic System. Jun 25 16:14:41.872314 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 16:14:41.872600 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:14:41.872898 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Jun 25 16:14:41.873266 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 16:14:41.873651 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:14:41.874105 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 16:14:41.874498 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 16:14:41.874896 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:14:41.875291 systemd[1]: Stopped target swap.target - Swaps. Jun 25 16:14:41.875549 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 16:14:41.875823 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:14:41.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.876300 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:14:41.876634 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 16:14:41.876904 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 16:14:41.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.877362 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 16:14:41.877633 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:14:41.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.878176 systemd[1]: Stopped target paths.target - Path Units. 
Jun 25 16:14:41.878505 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 16:14:41.882785 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:14:41.883250 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 16:14:41.883543 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 16:14:41.883832 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 16:14:41.884015 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:14:41.884337 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 16:14:41.884559 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:14:41.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.884946 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 16:14:41.885145 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 16:14:41.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.897099 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 16:14:41.897518 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:14:41.897762 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 16:14:41.897962 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:14:41.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:14:41.899097 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 16:14:41.899439 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 16:14:41.899839 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:14:41.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.900398 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 16:14:41.900700 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:14:41.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.901919 systemd-networkd[761]: ens192: Gained IPv6LL Jun 25 16:14:41.903642 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 16:14:41.903963 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:14:41.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.904767 systemd[1]: Stopped target network.target - Network. Jun 25 16:14:41.905155 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 16:14:41.905349 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:14:41.905794 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Jun 25 16:14:41.905980 ignition[986]: INFO : Ignition 2.15.0 Jun 25 16:14:41.905980 ignition[986]: INFO : Stage: umount Jun 25 16:14:41.906330 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:14:41.906330 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 25 16:14:41.906810 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 16:14:41.907208 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 16:14:41.907398 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 16:14:41.907544 ignition[986]: INFO : umount: umount passed Jun 25 16:14:41.907544 ignition[986]: INFO : Ignition finished successfully Jun 25 16:14:41.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.912039 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 16:14:41.912311 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 16:14:41.912896 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 16:14:41.913133 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 16:14:41.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:14:41.914051 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 16:14:41.914202 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:14:41.914472 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 16:14:41.914627 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 16:14:41.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.914904 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 16:14:41.915048 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 16:14:41.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.915312 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 16:14:41.915460 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 16:14:41.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:41.916151 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 16:14:41.916449 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 16:14:41.916605 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:14:41.916902 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jun 25 16:14:41.917055 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. 
Jun 25 16:14:41.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:41.917362 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 25 16:14:41.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:41.917546 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 25 16:14:41.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:41.917933 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 25 16:14:41.918128 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 25 16:14:41.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:41.917000 audit: BPF prog-id=9 op=UNLOAD
Jun 25 16:14:41.919907 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 16:14:41.920713 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 25 16:14:41.921003 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 25 16:14:41.921052 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 25 16:14:41.923615 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 25 16:14:41.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:41.927271 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 25 16:14:41.927538 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 16:14:41.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:41.926000 audit: BPF prog-id=6 op=UNLOAD
Jun 25 16:14:41.928167 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 25 16:14:41.928378 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 25 16:14:41.928694 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 25 16:14:41.928883 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 16:14:41.929382 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 25 16:14:41.929535 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 25 16:14:41.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:41.929824 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 25 16:14:41.929975 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 25 16:14:41.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:41.930243 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 25 16:14:41.930392 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 16:14:41.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:41.931139 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 25 16:14:41.931452 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 25 16:14:41.931607 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 16:14:41.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:41.931960 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 25 16:14:41.932118 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console.
Jun 25 16:14:41.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:41.932988 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 25 16:14:41.933420 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 25 16:14:41.933603 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 25 16:14:41.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:41.934644 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 25 16:14:41.934843 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 25 16:14:41.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:41.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:41.978860 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 25 16:14:41.978918 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 25 16:14:41.979167 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 25 16:14:41.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:41.979270 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 25 16:14:41.979295 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 25 16:14:41.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:41.981872 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 25 16:14:42.010367 systemd[1]: Switching root.
Jun 25 16:14:42.009000 audit: BPF prog-id=8 op=UNLOAD
Jun 25 16:14:42.009000 audit: BPF prog-id=7 op=UNLOAD
Jun 25 16:14:42.010000 audit: BPF prog-id=5 op=UNLOAD
Jun 25 16:14:42.010000 audit: BPF prog-id=4 op=UNLOAD
Jun 25 16:14:42.010000 audit: BPF prog-id=3 op=UNLOAD
Jun 25 16:14:42.026258 iscsid[771]: iscsid shutting down.
Jun 25 16:14:42.026428 systemd-journald[212]: Journal stopped
Jun 25 16:14:43.204217 systemd-journald[212]: Received SIGTERM from PID 1 (systemd).
Jun 25 16:14:43.204236 kernel: SELinux: Permission cmd in class io_uring not defined in policy.
Jun 25 16:14:43.204244 kernel: SELinux: the above unknown classes and permissions will be allowed
Jun 25 16:14:43.204250 kernel: SELinux: policy capability network_peer_controls=1
Jun 25 16:14:43.204255 kernel: SELinux: policy capability open_perms=1
Jun 25 16:14:43.204261 kernel: SELinux: policy capability extended_socket_class=1
Jun 25 16:14:43.204268 kernel: SELinux: policy capability always_check_network=0
Jun 25 16:14:43.204274 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 25 16:14:43.204279 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 25 16:14:43.204285 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 25 16:14:43.204290 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 25 16:14:43.204296 systemd[1]: Successfully loaded SELinux policy in 34.938ms.
Jun 25 16:14:43.204303 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.808ms.
Jun 25 16:14:43.204310 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jun 25 16:14:43.204318 systemd[1]: Detected virtualization vmware.
Jun 25 16:14:43.204323 systemd[1]: Detected architecture x86-64.
Jun 25 16:14:43.204329 systemd[1]: Detected first boot.
Jun 25 16:14:43.204337 systemd[1]: Initializing machine ID from random generator.
Jun 25 16:14:43.204343 systemd[1]: Populated /etc with preset unit settings.
Jun 25 16:14:43.204350 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jun 25 16:14:43.204357 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Jun 25 16:14:43.204363 systemd[1]: Queued start job for default target multi-user.target.
Jun 25 16:14:43.204370 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jun 25 16:14:43.204376 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 25 16:14:43.204382 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 25 16:14:43.204390 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 25 16:14:43.204397 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 25 16:14:43.204403 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 25 16:14:43.204409 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 25 16:14:43.204416 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 25 16:14:43.204422 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 25 16:14:43.204428 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 16:14:43.204436 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 25 16:14:43.204442 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 25 16:14:43.204448 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 25 16:14:43.204454 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 25 16:14:43.204461 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 16:14:43.204467 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 25 16:14:43.204473 systemd[1]: Reached target slices.target - Slice Units.
Jun 25 16:14:43.204479 systemd[1]: Reached target swap.target - Swaps.
Jun 25 16:14:43.204487 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 25 16:14:43.204495 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 25 16:14:43.204501 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe.
Jun 25 16:14:43.204508 kernel: kauditd_printk_skb: 77 callbacks suppressed
Jun 25 16:14:43.204515 kernel: audit: type=1400 audit(1719332083.107:88): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jun 25 16:14:43.204522 kernel: audit: type=1335 audit(1719332083.110:89): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jun 25 16:14:43.204530 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jun 25 16:14:43.205018 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 25 16:14:43.205032 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jun 25 16:14:43.205040 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 16:14:43.205047 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 25 16:14:43.205053 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 16:14:43.205060 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 25 16:14:43.205066 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 25 16:14:43.205076 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 25 16:14:43.205083 systemd[1]: Mounting media.mount - External Media Directory...
Jun 25 16:14:43.205090 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 16:14:43.205096 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 25 16:14:43.205103 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 25 16:14:43.205112 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 25 16:14:43.205142 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 25 16:14:43.205150 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Jun 25 16:14:43.205157 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 25 16:14:43.205165 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 25 16:14:43.205172 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 16:14:43.205179 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 25 16:14:43.205186 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 16:14:43.205193 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 25 16:14:43.205201 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 16:14:43.205208 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 25 16:14:43.205215 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jun 25 16:14:43.205222 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Jun 25 16:14:43.205229 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 25 16:14:43.205236 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 25 16:14:43.205242 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 25 16:14:43.205249 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 25 16:14:43.205256 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 25 16:14:43.205264 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 16:14:43.205271 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 25 16:14:43.205278 kernel: audit: type=1305 audit(1719332083.189:90): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jun 25 16:14:43.205285 kernel: audit: type=1300 audit(1719332083.189:90): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdc2f8bc90 a2=4000 a3=7ffdc2f8bd2c items=0 ppid=1 pid=1133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:14:43.205291 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 25 16:14:43.205298 systemd[1]: Mounted media.mount - External Media Directory.
Jun 25 16:14:43.205306 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 25 16:14:43.205314 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 25 16:14:43.205321 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 25 16:14:43.205328 kernel: audit: type=1327 audit(1719332083.189:90): proctitle="/usr/lib/systemd/systemd-journald"
Jun 25 16:14:43.205337 systemd-journald[1133]: Journal started
Jun 25 16:14:43.205366 systemd-journald[1133]: Runtime Journal (/run/log/journal/9811a8086af24eb9a930d1a68660e165) is 4.8M, max 38.7M, 33.9M free.
Jun 25 16:14:43.189000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jun 25 16:14:43.189000 audit[1133]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdc2f8bc90 a2=4000 a3=7ffdc2f8bd2c items=0 ppid=1 pid=1133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:14:43.189000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jun 25 16:14:43.205922 jq[1112]: true
Jun 25 16:14:43.208132 jq[1147]: true
Jun 25 16:14:43.218895 kernel: fuse: init (API version 7.37)
Jun 25 16:14:43.227300 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 25 16:14:43.227344 kernel: audit: type=1130 audit(1719332083.220:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.227356 kernel: audit: type=1130 audit(1719332083.220:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.227364 kernel: audit: type=1130 audit(1719332083.221:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.227375 kernel: audit: type=1131 audit(1719332083.221:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.227383 kernel: audit: type=1130 audit(1719332083.221:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.222481 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 16:14:43.222707 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 25 16:14:43.222804 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 25 16:14:43.223029 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 16:14:43.223111 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 16:14:43.223329 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 16:14:43.223405 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 16:14:43.223621 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 25 16:14:43.223692 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 25 16:14:43.235994 kernel: loop: module loaded
Jun 25 16:14:43.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.238135 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 25 16:14:43.238406 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 25 16:14:43.238655 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 25 16:14:43.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.241996 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 16:14:43.242101 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 16:14:43.242629 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 25 16:14:43.243941 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 25 16:14:43.245139 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 25 16:14:43.245269 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 25 16:14:43.249899 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 25 16:14:43.262398 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 25 16:14:43.262633 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 25 16:14:43.264458 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed...
Jun 25 16:14:43.276638 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 25 16:14:43.286175 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 25 16:14:43.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.288604 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 25 16:14:43.288832 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 25 16:14:43.288976 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 25 16:14:43.290320 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 25 16:14:43.298333 kernel: ACPI: bus type drm_connector registered
Jun 25 16:14:43.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.296048 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 25 16:14:43.296168 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 25 16:14:43.317923 systemd-journald[1133]: Time spent on flushing to /var/log/journal/9811a8086af24eb9a930d1a68660e165 is 33.283ms for 1940 entries.
Jun 25 16:14:43.317923 systemd-journald[1133]: System Journal (/var/log/journal/9811a8086af24eb9a930d1a68660e165) is 8.0M, max 584.8M, 576.8M free.
Jun 25 16:14:43.391499 systemd-journald[1133]: Received client request to flush runtime journal.
Jun 25 16:14:43.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.323302 ignition[1167]: Ignition 2.15.0
Jun 25 16:14:43.354189 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 16:14:43.323767 ignition[1167]: deleting config from guestinfo properties
Jun 25 16:14:43.358920 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jun 25 16:14:43.380097 ignition[1167]: Successfully deleted config
Jun 25 16:14:43.359313 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed.
Jun 25 16:14:43.359513 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 25 16:14:43.366247 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 25 16:14:43.376440 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 25 16:14:43.378862 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 25 16:14:43.381625 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Jun 25 16:14:43.392090 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 25 16:14:43.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.394769 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jun 25 16:14:43.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.400850 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 16:14:43.857714 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 25 16:14:43.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.861895 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 16:14:43.874665 systemd-udevd[1201]: Using default interface naming scheme 'v252'.
Jun 25 16:14:43.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.889702 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 16:14:43.894866 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 25 16:14:43.904834 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 25 16:14:43.928997 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jun 25 16:14:43.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:14:43.944886 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 25 16:14:43.967086 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 25 16:14:43.970748 kernel: ACPI: button: Power Button [PWRF] Jun 25 16:14:44.001351 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1210) Jun 25 16:14:44.006520 systemd-networkd[1209]: lo: Link UP Jun 25 16:14:44.006700 systemd-networkd[1209]: lo: Gained carrier Jun 25 16:14:44.007013 systemd-networkd[1209]: Enumeration completed Jun 25 16:14:44.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:44.007115 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:14:44.008761 systemd-networkd[1209]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Jun 25 16:14:44.011601 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jun 25 16:14:44.011744 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jun 25 16:14:44.011129 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 16:14:44.015746 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Jun 25 16:14:44.016105 systemd-networkd[1209]: ens192: Link UP Jun 25 16:14:44.016362 systemd-networkd[1209]: ens192: Gained carrier Jun 25 16:14:44.023762 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! 
Jun 25 16:14:44.044834 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jun 25 16:14:44.069863 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1206) Jun 25 16:14:44.069880 kernel: Guest personality initialized and is active Jun 25 16:14:44.069899 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Jun 25 16:14:44.072779 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jun 25 16:14:44.072825 kernel: Initialized host personality Jun 25 16:14:44.079184 (udev-worker)[1214]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jun 25 16:14:44.084786 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 16:14:44.097363 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jun 25 16:14:44.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:44.145006 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 16:14:44.153065 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 16:14:44.161286 lvm[1240]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:14:44.184438 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 16:14:44.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:44.184652 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:14:44.187889 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Jun 25 16:14:44.190749 lvm[1243]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:14:44.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:44.213388 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 16:14:44.213581 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:14:44.213692 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 16:14:44.213704 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:14:44.213971 systemd[1]: Reached target machines.target - Containers. Jun 25 16:14:44.221004 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 16:14:44.221200 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:14:44.221254 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:14:44.222426 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 16:14:44.223373 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 16:14:44.224555 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 16:14:44.225727 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jun 25 16:14:44.233563 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1247 (bootctl) Jun 25 16:14:44.236885 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 16:14:44.247753 kernel: loop0: detected capacity change from 0 to 209816 Jun 25 16:14:44.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:44.249493 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 16:14:44.695642 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 16:14:44.696421 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 16:14:44.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:44.839756 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 16:14:44.945758 kernel: loop1: detected capacity change from 0 to 139360 Jun 25 16:14:45.065211 systemd-fsck[1255]: fsck.fat 4.2 (2021-01-31) Jun 25 16:14:45.065211 systemd-fsck[1255]: /dev/sda1: 808 files, 120378/258078 clusters Jun 25 16:14:45.066953 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:14:45.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:14:45.070967 systemd[1]: Mounting boot.mount - Boot partition... Jun 25 16:14:45.089490 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 16:14:45.091747 kernel: loop2: detected capacity change from 0 to 3000 Jun 25 16:14:45.100776 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jun 25 16:14:45.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.124761 kernel: loop3: detected capacity change from 0 to 80584 Jun 25 16:14:45.181337 kernel: loop4: detected capacity change from 0 to 209816 Jun 25 16:14:45.218760 kernel: loop5: detected capacity change from 0 to 139360 Jun 25 16:14:45.249759 kernel: loop6: detected capacity change from 0 to 3000 Jun 25 16:14:45.287758 kernel: loop7: detected capacity change from 0 to 80584 Jun 25 16:14:45.317441 (sd-sysext)[1267]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Jun 25 16:14:45.317720 (sd-sysext)[1267]: Merged extensions into '/usr'. Jun 25 16:14:45.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.318752 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 16:14:45.323862 systemd[1]: Starting ensure-sysext.service... Jun 25 16:14:45.325111 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:14:45.340845 systemd[1]: Reloading. Jun 25 16:14:45.345892 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Jun 25 16:14:45.347455 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 16:14:45.347952 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 16:14:45.349561 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 16:14:45.449583 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 25 16:14:45.466759 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:14:45.508722 ldconfig[1245]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 16:14:45.518677 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 16:14:45.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.523260 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:14:45.525327 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:14:45.530425 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jun 25 16:14:45.531869 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 16:14:45.533345 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:14:45.535461 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 16:14:45.537113 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 16:14:45.543000 audit[1366]: SYSTEM_BOOT pid=1366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.547352 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:14:45.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jun 25 16:14:45.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.553035 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:14:45.554192 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:14:45.555401 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:14:45.555588 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:14:45.555700 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:14:45.555846 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:14:45.556651 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:14:45.556776 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:14:45.557271 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:14:45.557384 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:14:45.557788 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:14:45.557875 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:14:45.558268 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jun 25 16:14:45.558369 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:14:45.565061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:14:45.568005 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:14:45.569298 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:14:45.570632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:14:45.570818 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:14:45.570904 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:14:45.570977 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:14:45.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:14:45.573083 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 16:14:45.573505 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:14:45.573591 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:14:45.575570 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:14:45.575674 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:14:45.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.576272 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:14:45.578773 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:14:45.582221 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:14:45.587613 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:14:45.588917 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:14:45.589167 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jun 25 16:14:45.589252 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:14:45.589349 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:14:45.592180 systemd[1]: Finished ensure-sysext.service. Jun 25 16:14:45.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.592566 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:14:45.592684 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:14:45.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.600498 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jun 25 16:14:45.600597 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:14:45.600812 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:14:45.602802 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 16:14:45.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.607874 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 16:14:45.616523 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 16:14:45.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.621435 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:14:45.621549 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:14:45.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.621815 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:14:45.622640 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jun 25 16:14:45.622915 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:14:45.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.644209 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 16:14:45.646468 systemd-resolved[1362]: Positive Trust Anchors: Jun 25 16:14:45.646476 systemd-resolved[1362]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:14:45.646496 systemd-resolved[1362]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:14:45.646808 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 16:14:45.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.651728 systemd-resolved[1362]: Defaulting to hostname 'linux'. 
Jun 25 16:14:45.653288 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:14:45.653532 systemd[1]: Reached target network.target - Network. Jun 25 16:14:45.653631 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:14:45.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:14:45.652000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 16:14:45.652000 audit[1403]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe7c943240 a2=420 a3=0 items=0 ppid=1357 pid=1403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:14:45.652000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 16:14:45.654268 augenrules[1403]: No rules Jun 25 16:14:45.654556 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:14:45.658756 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 16:14:45.658990 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:14:45.659148 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 16:14:45.659276 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 16:14:45.659415 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jun 25 16:14:45.659539 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 16:14:45.659557 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:14:45.659687 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 16:14:45.659907 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 16:14:45.660072 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 16:14:45.660179 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:14:45.660556 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 16:14:45.661955 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 16:14:45.662708 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 16:14:45.662932 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:14:45.668467 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 16:14:45.668621 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:14:45.668728 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:14:45.668946 systemd[1]: System is tainted: cgroupsv1 Jun 25 16:14:45.668973 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:14:45.668985 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:14:45.670114 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 16:14:45.672281 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jun 25 16:14:45.673856 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 16:14:45.676218 jq[1416]: false Jun 25 16:14:45.677153 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 16:14:45.677316 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 16:14:45.678640 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 16:14:45.678967 systemd-networkd[1209]: ens192: Gained IPv6LL Jun 25 16:14:45.680869 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 16:14:45.682352 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 16:14:45.683876 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 16:14:45.692483 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 16:14:45.692660 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:14:45.692702 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 16:14:45.695852 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 16:14:45.697167 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 16:14:45.702095 jq[1431]: true Jun 25 16:14:45.698539 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Jun 25 16:14:45.699327 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 16:14:45.700117 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jun 25 16:14:45.700260 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 16:14:45.700928 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 16:14:45.701057 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 16:14:45.708246 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 16:14:45.713646 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Jun 25 16:14:45.741823 extend-filesystems[1418]: Found loop4 Jun 25 16:14:45.741823 extend-filesystems[1418]: Found loop5 Jun 25 16:14:45.741823 extend-filesystems[1418]: Found loop6 Jun 25 16:14:45.741823 extend-filesystems[1418]: Found loop7 Jun 25 16:14:45.741823 extend-filesystems[1418]: Found sda Jun 25 16:14:45.741823 extend-filesystems[1418]: Found sda1 Jun 25 16:14:45.741823 extend-filesystems[1418]: Found sda2 Jun 25 16:14:45.741823 extend-filesystems[1418]: Found sda3 Jun 25 16:14:45.741823 extend-filesystems[1418]: Found usr Jun 25 16:14:45.741823 extend-filesystems[1418]: Found sda4 Jun 25 16:14:45.741823 extend-filesystems[1418]: Found sda6 Jun 25 16:14:45.741823 extend-filesystems[1418]: Found sda7 Jun 25 16:14:45.741823 extend-filesystems[1418]: Found sda9 Jun 25 16:14:45.741823 extend-filesystems[1418]: Checking size of /dev/sda9 Jun 25 16:14:45.753782 update_engine[1430]: I0625 16:14:45.719486 1430 main.cc:92] Flatcar Update Engine starting Jun 25 16:14:45.719879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:14:45.721265 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 16:14:45.733408 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Jun 25 16:14:45.754203 jq[1437]: true Jun 25 16:14:45.734980 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... 
Jun 25 16:14:45.749907 systemd[1]: motdgen.service: Deactivated successfully.
Jun 25 16:14:45.750067 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 25 16:14:45.761800 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware.
Jun 25 16:14:45.781889 tar[1435]: linux-amd64/helm
Jun 25 16:14:45.784517 extend-filesystems[1418]: Old size kept for /dev/sda9
Jun 25 16:14:45.784517 extend-filesystems[1418]: Found sr0
Jun 25 16:14:45.783935 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 25 16:14:45.784078 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 25 16:14:45.795596 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 25 16:14:45.804568 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1217)
Jun 25 16:14:45.810089 systemd-logind[1426]: Watching system buttons on /dev/input/event1 (Power Button)
Jun 25 16:14:45.810305 systemd-logind[1426]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jun 25 16:14:45.812806 systemd-logind[1426]: New seat seat0.
Jun 25 16:14:45.820889 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jun 25 16:14:45.821029 systemd[1]: Finished coreos-metadata.service - VMware metadata agent.
Jun 25 16:14:45.839907 update_engine[1430]: I0625 16:14:45.832568 1430 update_check_scheduler.cc:74] Next update check in 5m44s
Jun 25 16:14:45.821280 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 25 16:14:45.831011 dbus-daemon[1415]: [system] SELinux support is enabled
Jun 25 16:14:45.831135 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 25 16:14:45.833105 dbus-daemon[1415]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jun 25 16:14:45.832688 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 25 16:14:45.832706 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 25 16:14:45.832851 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 25 16:14:45.832862 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 25 16:14:45.832997 systemd[1]: Started update-engine.service - Update Engine.
Jun 25 16:14:45.833111 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 25 16:14:45.834617 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 25 16:14:45.838988 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 25 16:14:45.842175 unknown[1448]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath
Jun 25 16:14:45.850552 unknown[1448]: Core dump limit set to -1
Jun 25 16:15:40.775811 systemd-resolved[1362]: Clock change detected. Flushing caches.
Jun 25 16:15:40.776095 systemd-timesyncd[1365]: Contacted time server 204.17.205.27:123 (0.flatcar.pool.ntp.org).
Jun 25 16:15:40.776227 systemd-timesyncd[1365]: Initial clock synchronization to Tue 2024-06-25 16:15:40.775776 UTC.
Jun 25 16:15:40.792639 kernel: NET: Registered PF_VSOCK protocol family
Jun 25 16:15:40.799558 bash[1482]: Updated "/home/core/.ssh/authorized_keys"
Jun 25 16:15:40.803899 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 25 16:15:40.804489 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jun 25 16:15:40.882881 locksmithd[1504]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 25 16:15:41.092037 containerd[1445]: time="2024-06-25T16:15:41.091956675Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13
Jun 25 16:15:41.124414 containerd[1445]: time="2024-06-25T16:15:41.124328128Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jun 25 16:15:41.124414 containerd[1445]: time="2024-06-25T16:15:41.124359718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jun 25 16:15:41.127374 containerd[1445]: time="2024-06-25T16:15:41.126744563Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jun 25 16:15:41.127374 containerd[1445]: time="2024-06-25T16:15:41.126765361Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 25 16:15:41.127374 containerd[1445]: time="2024-06-25T16:15:41.126928428Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 16:15:41.127374 containerd[1445]: time="2024-06-25T16:15:41.126938554Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 25 16:15:41.127374 containerd[1445]: time="2024-06-25T16:15:41.126986734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jun 25 16:15:41.127374 containerd[1445]: time="2024-06-25T16:15:41.127015846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 16:15:41.127374 containerd[1445]: time="2024-06-25T16:15:41.127023385Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jun 25 16:15:41.127374 containerd[1445]: time="2024-06-25T16:15:41.127058830Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jun 25 16:15:41.127374 containerd[1445]: time="2024-06-25T16:15:41.127172527Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jun 25 16:15:41.127374 containerd[1445]: time="2024-06-25T16:15:41.127182502Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jun 25 16:15:41.127374 containerd[1445]: time="2024-06-25T16:15:41.127188045Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jun 25 16:15:41.127558 containerd[1445]: time="2024-06-25T16:15:41.127262823Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 16:15:41.127558 containerd[1445]: time="2024-06-25T16:15:41.127271181Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jun 25 16:15:41.127558 containerd[1445]: time="2024-06-25T16:15:41.127297401Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jun 25 16:15:41.127558 containerd[1445]: time="2024-06-25T16:15:41.127304878Z" level=info msg="metadata content store policy set" policy=shared
Jun 25 16:15:41.136064 containerd[1445]: time="2024-06-25T16:15:41.135186280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jun 25 16:15:41.136064 containerd[1445]: time="2024-06-25T16:15:41.135220388Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jun 25 16:15:41.136064 containerd[1445]: time="2024-06-25T16:15:41.135232267Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jun 25 16:15:41.136064 containerd[1445]: time="2024-06-25T16:15:41.135252925Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jun 25 16:15:41.136064 containerd[1445]: time="2024-06-25T16:15:41.135263830Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jun 25 16:15:41.136064 containerd[1445]: time="2024-06-25T16:15:41.135272392Z" level=info msg="NRI interface is disabled by configuration."
Jun 25 16:15:41.136064 containerd[1445]: time="2024-06-25T16:15:41.135284194Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jun 25 16:15:41.136064 containerd[1445]: time="2024-06-25T16:15:41.135376465Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jun 25 16:15:41.136064 containerd[1445]: time="2024-06-25T16:15:41.135390611Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 25 16:15:41.136064 containerd[1445]: time="2024-06-25T16:15:41.135398220Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jun 25 16:15:41.136064 containerd[1445]: time="2024-06-25T16:15:41.135406815Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jun 25 16:15:41.136064 containerd[1445]: time="2024-06-25T16:15:41.135414787Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jun 25 16:15:41.136064 containerd[1445]: time="2024-06-25T16:15:41.135424833Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jun 25 16:15:41.136064 containerd[1445]: time="2024-06-25T16:15:41.135432867Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jun 25 16:15:41.136348 containerd[1445]: time="2024-06-25T16:15:41.135439383Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jun 25 16:15:41.136348 containerd[1445]: time="2024-06-25T16:15:41.135446767Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jun 25 16:15:41.136348 containerd[1445]: time="2024-06-25T16:15:41.135454228Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jun 25 16:15:41.136348 containerd[1445]: time="2024-06-25T16:15:41.135461509Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jun 25 16:15:41.136348 containerd[1445]: time="2024-06-25T16:15:41.135468133Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jun 25 16:15:41.136348 containerd[1445]: time="2024-06-25T16:15:41.135525968Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jun 25 16:15:41.136348 containerd[1445]: time="2024-06-25T16:15:41.135892207Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jun 25 16:15:41.136348 containerd[1445]: time="2024-06-25T16:15:41.135933838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jun 25 16:15:41.136348 containerd[1445]: time="2024-06-25T16:15:41.135953818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jun 25 16:15:41.136348 containerd[1445]: time="2024-06-25T16:15:41.135975194Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jun 25 16:15:41.136348 containerd[1445]: time="2024-06-25T16:15:41.136029539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jun 25 16:15:41.136348 containerd[1445]: time="2024-06-25T16:15:41.136043382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jun 25 16:15:41.136348 containerd[1445]: time="2024-06-25T16:15:41.136053925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jun 25 16:15:41.136348 containerd[1445]: time="2024-06-25T16:15:41.136063191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jun 25 16:15:41.136563 containerd[1445]: time="2024-06-25T16:15:41.136073308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jun 25 16:15:41.136563 containerd[1445]: time="2024-06-25T16:15:41.136083071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jun 25 16:15:41.136563 containerd[1445]: time="2024-06-25T16:15:41.136094006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jun 25 16:15:41.136563 containerd[1445]: time="2024-06-25T16:15:41.136103067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jun 25 16:15:41.136563 containerd[1445]: time="2024-06-25T16:15:41.136113820Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jun 25 16:15:41.136563 containerd[1445]: time="2024-06-25T16:15:41.136205753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jun 25 16:15:41.136563 containerd[1445]: time="2024-06-25T16:15:41.136224363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jun 25 16:15:41.136563 containerd[1445]: time="2024-06-25T16:15:41.136236854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jun 25 16:15:41.136563 containerd[1445]: time="2024-06-25T16:15:41.136252779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jun 25 16:15:41.136563 containerd[1445]: time="2024-06-25T16:15:41.136264144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jun 25 16:15:41.136563 containerd[1445]: time="2024-06-25T16:15:41.136276032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jun 25 16:15:41.136563 containerd[1445]: time="2024-06-25T16:15:41.136285262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jun 25 16:15:41.136563 containerd[1445]: time="2024-06-25T16:15:41.136296677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jun 25 16:15:41.136794 containerd[1445]: time="2024-06-25T16:15:41.136481323Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jun 25 16:15:41.136794 containerd[1445]: time="2024-06-25T16:15:41.136528377Z" level=info msg="Connect containerd service"
Jun 25 16:15:41.136794 containerd[1445]: time="2024-06-25T16:15:41.136564253Z" level=info msg="using legacy CRI server"
Jun 25 16:15:41.136794 containerd[1445]: time="2024-06-25T16:15:41.136570068Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jun 25 16:15:41.136794 containerd[1445]: time="2024-06-25T16:15:41.136588576Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jun 25 16:15:41.137113 containerd[1445]: time="2024-06-25T16:15:41.137090150Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 25 16:15:41.137140 containerd[1445]: time="2024-06-25T16:15:41.137127733Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jun 25 16:15:41.137158 containerd[1445]: time="2024-06-25T16:15:41.137143203Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jun 25 16:15:41.137158 containerd[1445]: time="2024-06-25T16:15:41.137153783Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jun 25 16:15:41.137192 containerd[1445]: time="2024-06-25T16:15:41.137167621Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
Jun 25 16:15:41.137457 containerd[1445]: time="2024-06-25T16:15:41.137439241Z" level=info msg="Start subscribing containerd event"
Jun 25 16:15:41.137511 containerd[1445]: time="2024-06-25T16:15:41.137502855Z" level=info msg="Start recovering state"
Jun 25 16:15:41.147631 containerd[1445]: time="2024-06-25T16:15:41.145316127Z" level=info msg="Start event monitor"
Jun 25 16:15:41.147631 containerd[1445]: time="2024-06-25T16:15:41.145354433Z" level=info msg="Start snapshots syncer"
Jun 25 16:15:41.147631 containerd[1445]: time="2024-06-25T16:15:41.145366919Z" level=info msg="Start cni network conf syncer for default"
Jun 25 16:15:41.147631 containerd[1445]: time="2024-06-25T16:15:41.145374661Z" level=info msg="Start streaming server"
Jun 25 16:15:41.147631 containerd[1445]: time="2024-06-25T16:15:41.145451131Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jun 25 16:15:41.147631 containerd[1445]: time="2024-06-25T16:15:41.145483032Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jun 25 16:15:41.147631 containerd[1445]: time="2024-06-25T16:15:41.145952608Z" level=info msg="containerd successfully booted in 0.054694s"
Jun 25 16:15:41.145606 systemd[1]: Started containerd.service - containerd container runtime.
Jun 25 16:15:41.307772 tar[1435]: linux-amd64/LICENSE
Jun 25 16:15:41.307910 tar[1435]: linux-amd64/README.md
Jun 25 16:15:41.316799 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jun 25 16:15:41.690207 sshd_keygen[1493]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jun 25 16:15:41.703915 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jun 25 16:15:41.708847 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 25 16:15:41.712407 systemd[1]: issuegen.service: Deactivated successfully.
Jun 25 16:15:41.712535 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 25 16:15:41.713948 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 25 16:15:41.724073 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 25 16:15:41.728916 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 25 16:15:41.730139 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jun 25 16:15:41.730367 systemd[1]: Reached target getty.target - Login Prompts.
Jun 25 16:15:42.103075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 16:15:42.103485 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 25 16:15:42.104901 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP...
Jun 25 16:15:42.109318 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jun 25 16:15:42.109453 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP.
Jun 25 16:15:42.109728 systemd[1]: Startup finished in 4.636s (kernel) + 4.937s (userspace) = 9.574s.
Jun 25 16:15:42.154652 login[1582]: pam_lastlog(login:session): file /var/log/lastlog is locked/read
Jun 25 16:15:42.156639 login[1583]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jun 25 16:15:42.161701 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 25 16:15:42.165798 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 25 16:15:42.168303 systemd-logind[1426]: New session 1 of user core.
Jun 25 16:15:42.173429 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jun 25 16:15:42.174468 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jun 25 16:15:42.178106 (systemd)[1594]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jun 25 16:15:42.231041 systemd[1594]: Queued start job for default target default.target.
Jun 25 16:15:42.231400 systemd[1594]: Reached target paths.target - Paths.
Jun 25 16:15:42.231414 systemd[1594]: Reached target sockets.target - Sockets.
Jun 25 16:15:42.231421 systemd[1594]: Reached target timers.target - Timers.
Jun 25 16:15:42.231428 systemd[1594]: Reached target basic.target - Basic System.
Jun 25 16:15:42.231504 systemd[1]: Started user@500.service - User Manager for UID 500.
Jun 25 16:15:42.232074 systemd[1594]: Reached target default.target - Main User Target.
Jun 25 16:15:42.232099 systemd[1594]: Startup finished in 50ms.
Jun 25 16:15:42.235873 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 25 16:15:43.030020 kubelet[1589]: E0625 16:15:43.029973 1589 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 25 16:15:43.031675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 16:15:43.031777 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 25 16:15:43.154970 login[1582]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jun 25 16:15:43.158094 systemd-logind[1426]: New session 2 of user core.
Jun 25 16:15:43.165758 systemd[1]: Started session-2.scope - Session 2 of User core.
Jun 25 16:15:53.282388 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 25 16:15:53.282547 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 16:15:53.289977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 16:15:53.355159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 16:15:53.383187 kubelet[1631]: E0625 16:15:53.383162 1631 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 25 16:15:53.385491 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 16:15:53.385577 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 25 16:16:03.636250 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 25 16:16:03.636405 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 16:16:03.643845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 16:16:03.919046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 16:16:03.969800 kubelet[1646]: E0625 16:16:03.969757 1646 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 25 16:16:03.971118 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 16:16:03.971202 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 25 16:16:14.221829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jun 25 16:16:14.221982 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 16:16:14.231830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 16:16:14.505701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 16:16:14.590773 kubelet[1661]: E0625 16:16:14.590746 1661 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 25 16:16:14.592097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 16:16:14.592182 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 25 16:16:20.867820 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 25 16:16:20.883871 systemd[1]: Started sshd@0-139.178.70.105:22-139.178.68.195:35296.service - OpenSSH per-connection server daemon (139.178.68.195:35296).
Jun 25 16:16:20.917404 sshd[1668]: Accepted publickey for core from 139.178.68.195 port 35296 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg
Jun 25 16:16:20.918622 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 16:16:20.921527 systemd-logind[1426]: New session 3 of user core.
Jun 25 16:16:20.933823 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 25 16:16:20.985905 systemd[1]: Started sshd@1-139.178.70.105:22-139.178.68.195:35306.service - OpenSSH per-connection server daemon (139.178.68.195:35306).
Jun 25 16:16:21.023709 sshd[1673]: Accepted publickey for core from 139.178.68.195 port 35306 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg
Jun 25 16:16:21.024568 sshd[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 16:16:21.027244 systemd-logind[1426]: New session 4 of user core.
Jun 25 16:16:21.030770 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 25 16:16:21.081823 sshd[1673]: pam_unix(sshd:session): session closed for user core
Jun 25 16:16:21.087838 systemd[1]: Started sshd@2-139.178.70.105:22-139.178.68.195:35310.service - OpenSSH per-connection server daemon (139.178.68.195:35310).
Jun 25 16:16:21.088136 systemd[1]: sshd@1-139.178.70.105:22-139.178.68.195:35306.service: Deactivated successfully.
Jun 25 16:16:21.090880 systemd-logind[1426]: Session 4 logged out. Waiting for processes to exit.
Jun 25 16:16:21.090921 systemd[1]: session-4.scope: Deactivated successfully.
Jun 25 16:16:21.091772 systemd-logind[1426]: Removed session 4.
Jun 25 16:16:21.119270 sshd[1678]: Accepted publickey for core from 139.178.68.195 port 35310 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg
Jun 25 16:16:21.119903 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 16:16:21.122648 systemd-logind[1426]: New session 5 of user core.
Jun 25 16:16:21.124767 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 25 16:16:21.171524 sshd[1678]: pam_unix(sshd:session): session closed for user core
Jun 25 16:16:21.175922 systemd[1]: Started sshd@3-139.178.70.105:22-139.178.68.195:35326.service - OpenSSH per-connection server daemon (139.178.68.195:35326).
Jun 25 16:16:21.176439 systemd[1]: sshd@2-139.178.70.105:22-139.178.68.195:35310.service: Deactivated successfully.
Jun 25 16:16:21.177145 systemd[1]: session-5.scope: Deactivated successfully.
Jun 25 16:16:21.177456 systemd-logind[1426]: Session 5 logged out. Waiting for processes to exit.
Jun 25 16:16:21.177991 systemd-logind[1426]: Removed session 5.
Jun 25 16:16:21.204600 sshd[1685]: Accepted publickey for core from 139.178.68.195 port 35326 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg
Jun 25 16:16:21.205337 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 16:16:21.208377 systemd-logind[1426]: New session 6 of user core.
Jun 25 16:16:21.210766 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 25 16:16:21.259884 sshd[1685]: pam_unix(sshd:session): session closed for user core
Jun 25 16:16:21.271940 systemd[1]: Started sshd@4-139.178.70.105:22-139.178.68.195:35330.service - OpenSSH per-connection server daemon (139.178.68.195:35330).
Jun 25 16:16:21.272208 systemd[1]: sshd@3-139.178.70.105:22-139.178.68.195:35326.service: Deactivated successfully.
Jun 25 16:16:21.272760 systemd-logind[1426]: Session 6 logged out. Waiting for processes to exit.
Jun 25 16:16:21.272828 systemd[1]: session-6.scope: Deactivated successfully.
Jun 25 16:16:21.273480 systemd-logind[1426]: Removed session 6.
Jun 25 16:16:21.301854 sshd[1692]: Accepted publickey for core from 139.178.68.195 port 35330 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg
Jun 25 16:16:21.302583 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 16:16:21.305167 systemd-logind[1426]: New session 7 of user core.
Jun 25 16:16:21.311818 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 25 16:16:21.368904 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 16:16:21.369107 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:16:21.379286 sudo[1698]: pam_unix(sudo:session): session closed for user root Jun 25 16:16:21.382753 sshd[1692]: pam_unix(sshd:session): session closed for user core Jun 25 16:16:21.390870 systemd[1]: Started sshd@5-139.178.70.105:22-139.178.68.195:35346.service - OpenSSH per-connection server daemon (139.178.68.195:35346). Jun 25 16:16:21.391221 systemd[1]: sshd@4-139.178.70.105:22-139.178.68.195:35330.service: Deactivated successfully. Jun 25 16:16:21.392037 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 16:16:21.392067 systemd-logind[1426]: Session 7 logged out. Waiting for processes to exit. Jun 25 16:16:21.393003 systemd-logind[1426]: Removed session 7. Jun 25 16:16:21.424889 sshd[1700]: Accepted publickey for core from 139.178.68.195 port 35346 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:16:21.425644 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:16:21.428205 systemd-logind[1426]: New session 8 of user core. Jun 25 16:16:21.430762 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 25 16:16:21.479491 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 16:16:21.479837 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:16:21.481725 sudo[1707]: pam_unix(sudo:session): session closed for user root Jun 25 16:16:21.484479 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 16:16:21.484624 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:16:21.495806 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 16:16:21.502799 kernel: kauditd_printk_skb: 62 callbacks suppressed Jun 25 16:16:21.502855 kernel: audit: type=1305 audit(1719332181.496:156): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:16:21.502873 kernel: audit: type=1300 audit(1719332181.496:156): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc9fe781c0 a2=420 a3=0 items=0 ppid=1 pid=1710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:21.502887 kernel: audit: type=1327 audit(1719332181.496:156): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:16:21.502897 kernel: audit: type=1131 audit(1719332181.496:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:16:21.496000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:16:21.496000 audit[1710]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc9fe781c0 a2=420 a3=0 items=0 ppid=1 pid=1710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:21.496000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:16:21.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:16:21.503037 auditctl[1710]: No rules Jun 25 16:16:21.497054 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 16:16:21.497174 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 16:16:21.498458 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:16:21.514182 augenrules[1728]: No rules Jun 25 16:16:21.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:16:21.514720 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:16:21.515334 sudo[1706]: pam_unix(sudo:session): session closed for user root Jun 25 16:16:21.519438 kernel: audit: type=1130 audit(1719332181.514:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:16:21.519476 kernel: audit: type=1106 audit(1719332181.514:159): pid=1706 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:16:21.514000 audit[1706]: USER_END pid=1706 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:16:21.519940 sshd[1700]: pam_unix(sshd:session): session closed for user core Jun 25 16:16:21.514000 audit[1706]: CRED_DISP pid=1706 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:16:21.520000 audit[1700]: USER_END pid=1700 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:16:21.522828 systemd[1]: Started sshd@6-139.178.70.105:22-139.178.68.195:35348.service - OpenSSH per-connection server daemon (139.178.68.195:35348). Jun 25 16:16:21.523095 systemd[1]: sshd@5-139.178.70.105:22-139.178.68.195:35346.service: Deactivated successfully. Jun 25 16:16:21.523923 kernel: audit: type=1104 audit(1719332181.514:160): pid=1706 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:16:21.523949 kernel: audit: type=1106 audit(1719332181.520:161): pid=1700 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:16:21.520000 audit[1700]: CRED_DISP pid=1700 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:16:21.524104 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 16:16:21.524526 systemd-logind[1426]: Session 8 logged out. Waiting for processes to exit. Jun 25 16:16:21.525788 kernel: audit: type=1104 audit(1719332181.520:162): pid=1700 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:16:21.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.105:22-139.178.68.195:35348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:16:21.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.70.105:22-139.178.68.195:35346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:16:21.526754 systemd-logind[1426]: Removed session 8. 
Jun 25 16:16:21.528688 kernel: audit: type=1130 audit(1719332181.522:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.105:22-139.178.68.195:35348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:16:21.553000 audit[1733]: USER_ACCT pid=1733 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:16:21.554258 sshd[1733]: Accepted publickey for core from 139.178.68.195 port 35348 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:16:21.554000 audit[1733]: CRED_ACQ pid=1733 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:16:21.554000 audit[1733]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9985d400 a2=3 a3=7f2e6f84b480 items=0 ppid=1 pid=1733 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:21.554000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:16:21.555174 sshd[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:16:21.557573 systemd-logind[1426]: New session 9 of user core. Jun 25 16:16:21.560757 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jun 25 16:16:21.562000 audit[1733]: USER_START pid=1733 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:16:21.563000 audit[1738]: CRED_ACQ pid=1738 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:16:21.608000 audit[1739]: USER_ACCT pid=1739 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:16:21.609470 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 16:16:21.609000 audit[1739]: CRED_REFR pid=1739 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:16:21.609845 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:16:21.610000 audit[1739]: USER_START pid=1739 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:16:21.694815 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 16:16:21.913585 dockerd[1748]: time="2024-06-25T16:16:21.913549164Z" level=info msg="Starting up" Jun 25 16:16:21.922755 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1374440693-merged.mount: Deactivated successfully. 
Jun 25 16:16:21.931524 systemd[1]: var-lib-docker-metacopy\x2dcheck1792973426-merged.mount: Deactivated successfully. Jun 25 16:16:21.939727 dockerd[1748]: time="2024-06-25T16:16:21.939680306Z" level=info msg="Loading containers: start." Jun 25 16:16:21.988000 audit[1780]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1780 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:21.988000 audit[1780]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd52913430 a2=0 a3=7f187eb2ce90 items=0 ppid=1748 pid=1780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:21.988000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 16:16:21.990000 audit[1782]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1782 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:21.990000 audit[1782]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd33ffb3c0 a2=0 a3=7fa220287e90 items=0 ppid=1748 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:21.990000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 16:16:21.991000 audit[1784]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1784 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:21.991000 audit[1784]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff15335ce0 a2=0 a3=7fbc2a37ce90 items=0 ppid=1748 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:21.991000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:16:21.992000 audit[1786]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1786 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:21.992000 audit[1786]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcb1242d70 a2=0 a3=7f963146de90 items=0 ppid=1748 pid=1786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:21.992000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:16:21.994000 audit[1788]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1788 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:21.994000 audit[1788]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffbfac4670 a2=0 a3=7f1e3bbcde90 items=0 ppid=1748 pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:21.994000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 16:16:21.995000 audit[1790]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1790 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:21.995000 audit[1790]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffea8b1e860 a2=0 a3=7fb63813de90 items=0 ppid=1748 pid=1790 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:21.995000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 16:16:22.000000 audit[1792]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1792 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:22.000000 audit[1792]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff7b45fc90 a2=0 a3=7ff375c8ee90 items=0 ppid=1748 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:22.000000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 16:16:22.001000 audit[1794]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1794 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:22.001000 audit[1794]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdd7783ee0 a2=0 a3=7ff892ac4e90 items=0 ppid=1748 pid=1794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:22.001000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 16:16:22.002000 audit[1796]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1796 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:22.002000 audit[1796]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd2f0c65b0 a2=0 a3=7f7715589e90 items=0 ppid=1748 
pid=1796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:22.002000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:16:22.006000 audit[1800]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1800 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:22.006000 audit[1800]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe2e447e00 a2=0 a3=7fb89a605e90 items=0 ppid=1748 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:22.006000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:16:22.007000 audit[1801]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1801 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:22.007000 audit[1801]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc175c1af0 a2=0 a3=7f41fa792e90 items=0 ppid=1748 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:22.007000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:16:22.015634 kernel: Initializing XFRM netlink socket Jun 25 16:16:22.043000 audit[1809]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1809 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:22.043000 audit[1809]: SYSCALL arch=c000003e syscall=46 
success=yes exit=492 a0=3 a1=7ffe348f8210 a2=0 a3=7fc7dabf5e90 items=0 ppid=1748 pid=1809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:22.043000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 16:16:22.060000 audit[1812]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1812 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:22.060000 audit[1812]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffec1695250 a2=0 a3=7f9a3eeb1e90 items=0 ppid=1748 pid=1812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:22.060000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 16:16:22.063000 audit[1816]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1816 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:22.063000 audit[1816]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc6ead6c50 a2=0 a3=7f12c59e4e90 items=0 ppid=1748 pid=1816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:22.063000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 16:16:22.064000 audit[1818]: NETFILTER_CFG table=filter:16 family=2 
entries=1 op=nft_register_rule pid=1818 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:22.064000 audit[1818]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffde53dc6a0 a2=0 a3=7f5f38231e90 items=0 ppid=1748 pid=1818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:22.064000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 16:16:22.066000 audit[1820]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1820 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:22.066000 audit[1820]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffe51ac4eb0 a2=0 a3=7f4a9dc3ee90 items=0 ppid=1748 pid=1820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:22.066000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 16:16:22.067000 audit[1822]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1822 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:22.067000 audit[1822]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fffd33f7810 a2=0 a3=7fd66fbbde90 items=0 ppid=1748 pid=1822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:22.067000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 16:16:22.068000 audit[1824]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1824 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:22.068000 audit[1824]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fffa45029c0 a2=0 a3=7f6a137fee90 items=0 ppid=1748 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:22.068000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 16:16:22.072000 audit[1827]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1827 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:22.072000 audit[1827]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffe47de0850 a2=0 a3=7f85b13ebe90 items=0 ppid=1748 pid=1827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:22.072000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 16:16:22.073000 audit[1829]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1829 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:22.073000 audit[1829]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffda2da5d50 a2=0 a3=7f7d11263e90 items=0 ppid=1748 pid=1829 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:22.073000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:16:22.075000 audit[1831]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1831 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:22.075000 audit[1831]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffede4f5ee0 a2=0 a3=7f82f89c4e90 items=0 ppid=1748 pid=1831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:22.075000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:16:22.076000 audit[1833]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1833 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:22.076000 audit[1833]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdf8d01d80 a2=0 a3=7ff314d46e90 items=0 ppid=1748 pid=1833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:22.076000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 16:16:22.077788 systemd-networkd[1209]: docker0: Link UP Jun 25 16:16:22.081000 
audit[1837]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1837 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:22.081000 audit[1837]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff22580720 a2=0 a3=7fde4f012e90 items=0 ppid=1748 pid=1837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:22.081000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:16:22.082000 audit[1838]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1838 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:22.082000 audit[1838]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd91e7e720 a2=0 a3=7f9528528e90 items=0 ppid=1748 pid=1838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:22.082000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:16:22.083112 dockerd[1748]: time="2024-06-25T16:16:22.083095858Z" level=info msg="Loading containers: done." 
Jun 25 16:16:22.143279 dockerd[1748]: time="2024-06-25T16:16:22.143248727Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 16:16:22.143390 dockerd[1748]: time="2024-06-25T16:16:22.143377106Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 16:16:22.143439 dockerd[1748]: time="2024-06-25T16:16:22.143428208Z" level=info msg="Daemon has completed initialization" Jun 25 16:16:22.160657 dockerd[1748]: time="2024-06-25T16:16:22.160613486Z" level=info msg="API listen on /run/docker.sock" Jun 25 16:16:22.160999 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 16:16:22.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:16:23.357436 containerd[1445]: time="2024-06-25T16:16:23.357408940Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 16:16:23.979832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2599469241.mount: Deactivated successfully. Jun 25 16:16:24.663045 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 25 16:16:24.663201 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:16:24.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:16:24.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:16:24.669842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:16:24.737013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:16:24.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:16:24.785659 kubelet[1939]: E0625 16:16:24.785630 1939 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:16:24.787038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:16:24.787169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:16:24.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:16:25.536508 update_engine[1430]: I0625 16:16:25.536463 1430 update_attempter.cc:509] Updating boot flags... 
Jun 25 16:16:25.599734 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1956) Jun 25 16:16:25.669648 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1955) Jun 25 16:16:26.194439 containerd[1445]: time="2024-06-25T16:16:26.194409294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:26.195450 containerd[1445]: time="2024-06-25T16:16:26.195420331Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605178" Jun 25 16:16:26.195669 containerd[1445]: time="2024-06-25T16:16:26.195652679Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:26.197250 containerd[1445]: time="2024-06-25T16:16:26.197233185Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:26.198809 containerd[1445]: time="2024-06-25T16:16:26.198790909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:26.200407 containerd[1445]: time="2024-06-25T16:16:26.200384295Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 2.84294641s" Jun 25 16:16:26.200490 containerd[1445]: time="2024-06-25T16:16:26.200474617Z" level=info msg="PullImage 
\"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jun 25 16:16:26.214349 containerd[1445]: time="2024-06-25T16:16:26.214309551Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 16:16:28.634134 containerd[1445]: time="2024-06-25T16:16:28.634097123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:28.639902 containerd[1445]: time="2024-06-25T16:16:28.639877833Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719491" Jun 25 16:16:28.642891 containerd[1445]: time="2024-06-25T16:16:28.642877587Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:28.656223 containerd[1445]: time="2024-06-25T16:16:28.656201893Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:28.664168 containerd[1445]: time="2024-06-25T16:16:28.664146059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:28.664984 containerd[1445]: time="2024-06-25T16:16:28.664963673Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 2.450508511s" 
Jun 25 16:16:28.665058 containerd[1445]: time="2024-06-25T16:16:28.665043473Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jun 25 16:16:28.684700 containerd[1445]: time="2024-06-25T16:16:28.684668785Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 16:16:30.743606 containerd[1445]: time="2024-06-25T16:16:30.743575929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:30.755970 containerd[1445]: time="2024-06-25T16:16:30.755936444Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925505" Jun 25 16:16:30.770182 containerd[1445]: time="2024-06-25T16:16:30.770164269Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:30.791568 containerd[1445]: time="2024-06-25T16:16:30.791551732Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:30.802851 containerd[1445]: time="2024-06-25T16:16:30.802829976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:30.803739 containerd[1445]: time="2024-06-25T16:16:30.803711191Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 2.119011935s" Jun 25 16:16:30.803876 containerd[1445]: time="2024-06-25T16:16:30.803740303Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jun 25 16:16:30.819515 containerd[1445]: time="2024-06-25T16:16:30.819473597Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 16:16:32.454753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1781952777.mount: Deactivated successfully. Jun 25 16:16:32.883123 containerd[1445]: time="2024-06-25T16:16:32.883028747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:32.888200 containerd[1445]: time="2024-06-25T16:16:32.888171458Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419" Jun 25 16:16:32.894987 containerd[1445]: time="2024-06-25T16:16:32.894966832Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:32.900224 containerd[1445]: time="2024-06-25T16:16:32.900208908Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:32.908181 containerd[1445]: time="2024-06-25T16:16:32.908164328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:32.909232 containerd[1445]: time="2024-06-25T16:16:32.909208171Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id 
\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 2.089672592s" Jun 25 16:16:32.909313 containerd[1445]: time="2024-06-25T16:16:32.909297823Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jun 25 16:16:32.927684 containerd[1445]: time="2024-06-25T16:16:32.927660616Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 16:16:33.497031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1796303884.mount: Deactivated successfully. Jun 25 16:16:33.499672 containerd[1445]: time="2024-06-25T16:16:33.499648309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:33.500374 containerd[1445]: time="2024-06-25T16:16:33.500339821Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jun 25 16:16:33.500673 containerd[1445]: time="2024-06-25T16:16:33.500657734Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:33.505099 containerd[1445]: time="2024-06-25T16:16:33.505075655Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:33.506298 containerd[1445]: time="2024-06-25T16:16:33.506281223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:33.506967 containerd[1445]: time="2024-06-25T16:16:33.506950211Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 579.146125ms" Jun 25 16:16:33.507038 containerd[1445]: time="2024-06-25T16:16:33.507026192Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 16:16:33.520577 containerd[1445]: time="2024-06-25T16:16:33.520552352Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 16:16:34.048987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3130511625.mount: Deactivated successfully. Jun 25 16:16:34.912941 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jun 25 16:16:34.913068 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:16:34.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:16:34.924955 kernel: kauditd_printk_skb: 88 callbacks suppressed Jun 25 16:16:34.924994 kernel: audit: type=1130 audit(1719332194.912:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:16:34.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:16:34.926640 kernel: audit: type=1131 audit(1719332194.912:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:16:34.927942 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:16:35.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:16:35.376958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:16:35.379702 kernel: audit: type=1130 audit(1719332195.376:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:16:35.464283 kubelet[2061]: E0625 16:16:35.464249 2061 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:16:35.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:16:35.465485 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:16:35.465573 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:16:35.467631 kernel: audit: type=1131 audit(1719332195.465:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jun 25 16:16:36.801169 containerd[1445]: time="2024-06-25T16:16:36.801131456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:36.806172 containerd[1445]: time="2024-06-25T16:16:36.806152894Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jun 25 16:16:36.816566 containerd[1445]: time="2024-06-25T16:16:36.816548204Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:36.825118 containerd[1445]: time="2024-06-25T16:16:36.825102518Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:36.832395 containerd[1445]: time="2024-06-25T16:16:36.832376920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:36.832890 containerd[1445]: time="2024-06-25T16:16:36.832873003Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.312178095s" Jun 25 16:16:36.832949 containerd[1445]: time="2024-06-25T16:16:36.832937809Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 16:16:36.846155 containerd[1445]: time="2024-06-25T16:16:36.846130367Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 16:16:37.478922 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2129686069.mount: Deactivated successfully. Jun 25 16:16:37.940024 containerd[1445]: time="2024-06-25T16:16:37.939984144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:37.940401 containerd[1445]: time="2024-06-25T16:16:37.940373354Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749" Jun 25 16:16:37.940852 containerd[1445]: time="2024-06-25T16:16:37.940841141Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:37.941762 containerd[1445]: time="2024-06-25T16:16:37.941751603Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:37.944217 containerd[1445]: time="2024-06-25T16:16:37.944202928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:16:37.945350 containerd[1445]: time="2024-06-25T16:16:37.945332042Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.099062904s" Jun 25 16:16:37.945426 containerd[1445]: time="2024-06-25T16:16:37.945409436Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jun 25 
16:16:40.424223 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:16:40.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:16:40.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:16:40.427678 kernel: audit: type=1130 audit(1719332200.424:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:16:40.427719 kernel: audit: type=1131 audit(1719332200.424:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:16:40.428823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:16:40.441151 systemd[1]: Reloading. Jun 25 16:16:40.558019 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 25 16:16:40.569719 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:16:40.632917 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 16:16:40.632976 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 16:16:40.633261 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 16:16:40.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:16:40.635634 kernel: audit: type=1130 audit(1719332200.632:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:16:40.642153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:16:40.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:16:40.819671 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:16:40.821906 kernel: audit: type=1130 audit(1719332200.819:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:16:40.891839 kubelet[2238]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:16:40.892102 kubelet[2238]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:16:40.892135 kubelet[2238]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 25 16:16:40.892219 kubelet[2238]: I0625 16:16:40.892199 2238 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:16:41.130918 kubelet[2238]: I0625 16:16:41.130869 2238 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 16:16:41.131002 kubelet[2238]: I0625 16:16:41.130995 2238 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:16:41.131166 kubelet[2238]: I0625 16:16:41.131159 2238 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 16:16:41.252003 kubelet[2238]: I0625 16:16:41.251591 2238 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:16:41.261070 kubelet[2238]: E0625 16:16:41.260978 2238 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:16:41.268047 kubelet[2238]: I0625 16:16:41.268034 2238 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:16:41.268324 kubelet[2238]: I0625 16:16:41.268318 2238 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:16:41.268467 kubelet[2238]: I0625 16:16:41.268458 2238 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:16:41.268838 kubelet[2238]: I0625 16:16:41.268830 2238 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:16:41.268882 kubelet[2238]: I0625 16:16:41.268877 2238 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:16:41.269523 kubelet[2238]: 
I0625 16:16:41.269516 2238 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:16:41.270787 kubelet[2238]: I0625 16:16:41.270781 2238 kubelet.go:393] "Attempting to sync node with API server" Jun 25 16:16:41.270835 kubelet[2238]: I0625 16:16:41.270829 2238 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:16:41.270887 kubelet[2238]: I0625 16:16:41.270881 2238 kubelet.go:309] "Adding apiserver pod source" Jun 25 16:16:41.270925 kubelet[2238]: I0625 16:16:41.270920 2238 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:16:41.271118 kubelet[2238]: W0625 16:16:41.271090 2238 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:16:41.271150 kubelet[2238]: E0625 16:16:41.271125 2238 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:16:41.271996 kubelet[2238]: I0625 16:16:41.271989 2238 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:16:41.274115 kubelet[2238]: W0625 16:16:41.274094 2238 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:16:41.274115 kubelet[2238]: E0625 16:16:41.274116 2238 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://139.178.70.105:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:16:41.275144 kubelet[2238]: W0625 16:16:41.275136 2238 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 16:16:41.275460 kubelet[2238]: I0625 16:16:41.275452 2238 server.go:1232] "Started kubelet" Jun 25 16:16:41.276119 kubelet[2238]: I0625 16:16:41.276111 2238 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:16:41.278488 kubelet[2238]: E0625 16:16:41.278476 2238 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 16:16:41.278525 kubelet[2238]: E0625 16:16:41.278492 2238 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:16:41.278564 kubelet[2238]: E0625 16:16:41.278523 2238 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17dc4b8713e9bd5a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.June, 25, 16, 16, 41, 275440474, time.Local), 
LastTimestamp:time.Date(2024, time.June, 25, 16, 16, 41, 275440474, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://139.178.70.105:6443/api/v1/namespaces/default/events": dial tcp 139.178.70.105:6443: connect: connection refused'(may retry after sleeping) Jun 25 16:16:41.278000 audit[2250]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2250 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:41.278000 audit[2250]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd08df7030 a2=0 a3=7f5a98fbae90 items=0 ppid=2238 pid=2250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:41.280452 kubelet[2238]: I0625 16:16:41.280427 2238 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:16:41.280809 kubelet[2238]: I0625 16:16:41.280795 2238 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 16:16:41.280971 kubelet[2238]: I0625 16:16:41.280965 2238 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:16:41.282444 kernel: audit: type=1325 audit(1719332201.278:210): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2250 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:41.282476 kernel: audit: type=1300 audit(1719332201.278:210): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd08df7030 a2=0 a3=7f5a98fbae90 items=0 ppid=2238 pid=2250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:41.282492 kernel: audit: type=1327 audit(1719332201.278:210): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:16:41.278000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:16:41.283351 kubelet[2238]: I0625 16:16:41.283343 2238 server.go:462] "Adding debug handlers to kubelet server" Jun 25 16:16:41.278000 audit[2251]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2251 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:41.283742 kernel: audit: type=1325 audit(1719332201.278:211): table=filter:27 family=2 entries=1 op=nft_register_chain pid=2251 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:41.283900 kubelet[2238]: E0625 16:16:41.283893 2238 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:16:41.283951 kubelet[2238]: I0625 16:16:41.283946 2238 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:16:41.284023 kubelet[2238]: I0625 16:16:41.284018 2238 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:16:41.284081 kubelet[2238]: I0625 16:16:41.284077 2238 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:16:41.284306 kubelet[2238]: W0625 16:16:41.284288 2238 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:16:41.284354 kubelet[2238]: E0625 16:16:41.284348 2238 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:16:41.278000 audit[2251]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb9f2ea70 a2=0 a3=7f2111c03e90 items=0 ppid=2238 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:41.285910 kubelet[2238]: E0625 16:16:41.285903 2238 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="200ms" Jun 25 16:16:41.286973 kernel: audit: type=1300 audit(1719332201.278:211): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb9f2ea70 a2=0 a3=7f2111c03e90 items=0 ppid=2238 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:41.287071 kernel: audit: type=1327 audit(1719332201.278:211): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:16:41.278000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:16:41.288000 audit[2253]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2253 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:41.288000 audit[2253]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff49440530 a2=0 a3=7fcce3851e90 items=0 ppid=2238 pid=2253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:41.288000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:16:41.290000 audit[2255]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2255 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:41.290000 audit[2255]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc81217530 a2=0 a3=7f52ba7f8e90 items=0 ppid=2238 pid=2255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:41.290000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:16:41.313079 kubelet[2238]: I0625 16:16:41.313062 2238 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:16:41.313079 kubelet[2238]: I0625 16:16:41.313076 2238 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:16:41.313172 kubelet[2238]: I0625 16:16:41.313089 2238 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:16:41.314517 kubelet[2238]: I0625 16:16:41.314504 2238 policy_none.go:49] "None policy: Start" Jun 25 16:16:41.315313 kubelet[2238]: I0625 16:16:41.314949 2238 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jun 25 16:16:41.315313 kubelet[2238]: I0625 16:16:41.315151 2238 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 16:16:41.315313 kubelet[2238]: I0625 16:16:41.315167 2238 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:16:41.314000 audit[2260]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2260 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:41.314000 audit[2260]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd6d230bd0 a2=0 a3=7f9a87d12e90 items=0 ppid=2238 pid=2260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:41.314000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 16:16:41.315000 audit[2261]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2261 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:16:41.315000 audit[2261]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc3483fc00 a2=0 a3=7fa55e319e90 items=0 ppid=2238 pid=2261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:41.315000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:16:41.316735 kubelet[2238]: I0625 16:16:41.316728 2238 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 16:16:41.316775 kubelet[2238]: I0625 16:16:41.316770 2238 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:16:41.316816 kubelet[2238]: I0625 16:16:41.316811 2238 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 16:16:41.316870 kubelet[2238]: E0625 16:16:41.316864 2238 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:16:41.316000 audit[2263]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2263 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:41.316000 audit[2263]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffddd6b9e40 a2=0 a3=7efe43a20e90 items=0 ppid=2238 pid=2263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:41.316000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:16:41.317000 audit[2264]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=2264 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:16:41.317000 audit[2264]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce3ee08d0 a2=0 a3=7fbba0c2fe90 items=0 ppid=2238 pid=2264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:41.317000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:16:41.318000 audit[2265]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2265 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:41.318000 audit[2265]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0d460910 a2=0 a3=7f797231ee90 items=0 ppid=2238 pid=2265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:41.318000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:16:41.318000 audit[2266]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=2266 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:16:41.318000 audit[2266]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe441f31f0 a2=0 a3=7f6978f0de90 items=0 ppid=2238 pid=2266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:41.318000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:16:41.319000 audit[2267]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=2267 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:16:41.319000 audit[2267]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffe1579f8c0 a2=0 a3=7fafd7f83e90 items=0 ppid=2238 pid=2267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:41.319000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:16:41.320879 kubelet[2238]: I0625 16:16:41.320872 2238 
manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:16:41.321043 kubelet[2238]: I0625 16:16:41.321037 2238 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:16:41.320000 audit[2268]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2268 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:16:41.320000 audit[2268]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd26846380 a2=0 a3=7fa4e8b95e90 items=0 ppid=2238 pid=2268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:16:41.320000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:16:41.322718 kubelet[2238]: W0625 16:16:41.322528 2238 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:16:41.322718 kubelet[2238]: E0625 16:16:41.322547 2238 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:16:41.323092 kubelet[2238]: E0625 16:16:41.323078 2238 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 25 16:16:41.385690 kubelet[2238]: I0625 16:16:41.385633 2238 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:16:41.386165 kubelet[2238]: E0625 
16:16:41.386158 2238 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jun 25 16:16:41.417641 kubelet[2238]: I0625 16:16:41.417611 2238 topology_manager.go:215] "Topology Admit Handler" podUID="41fdcc238a1adae31bcb8dc35d7df311" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 16:16:41.418538 kubelet[2238]: I0625 16:16:41.418528 2238 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 16:16:41.420818 kubelet[2238]: I0625 16:16:41.420808 2238 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 16:16:41.486993 kubelet[2238]: E0625 16:16:41.486977 2238 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="400ms" Jun 25 16:16:41.487386 kubelet[2238]: I0625 16:16:41.487369 2238 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41fdcc238a1adae31bcb8dc35d7df311-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"41fdcc238a1adae31bcb8dc35d7df311\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:16:41.487428 kubelet[2238]: I0625 16:16:41.487393 2238 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " 
pod="kube-system/kube-controller-manager-localhost" Jun 25 16:16:41.487428 kubelet[2238]: I0625 16:16:41.487407 2238 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:16:41.487428 kubelet[2238]: I0625 16:16:41.487418 2238 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41fdcc238a1adae31bcb8dc35d7df311-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"41fdcc238a1adae31bcb8dc35d7df311\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:16:41.487484 kubelet[2238]: I0625 16:16:41.487430 2238 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:16:41.487484 kubelet[2238]: I0625 16:16:41.487440 2238 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:16:41.487484 kubelet[2238]: I0625 16:16:41.487451 2238 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:16:41.487484 kubelet[2238]: I0625 16:16:41.487462 2238 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jun 25 16:16:41.487484 kubelet[2238]: I0625 16:16:41.487473 2238 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41fdcc238a1adae31bcb8dc35d7df311-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"41fdcc238a1adae31bcb8dc35d7df311\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:16:41.587160 kubelet[2238]: I0625 16:16:41.587140 2238 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:16:41.587364 kubelet[2238]: E0625 16:16:41.587350 2238 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jun 25 16:16:41.721927 containerd[1445]: time="2024-06-25T16:16:41.721818553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:41fdcc238a1adae31bcb8dc35d7df311,Namespace:kube-system,Attempt:0,}" Jun 25 16:16:41.724213 containerd[1445]: time="2024-06-25T16:16:41.724154596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jun 25 16:16:41.727178 containerd[1445]: time="2024-06-25T16:16:41.727156450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jun 25 16:16:41.887481 
kubelet[2238]: E0625 16:16:41.887458 2238 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="800ms" Jun 25 16:16:41.988485 kubelet[2238]: I0625 16:16:41.988431 2238 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:16:41.988754 kubelet[2238]: E0625 16:16:41.988726 2238 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jun 25 16:16:42.073483 kubelet[2238]: W0625 16:16:42.073442 2238 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:16:42.073483 kubelet[2238]: E0625 16:16:42.073480 2238 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:16:42.165852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3778097856.mount: Deactivated successfully. 
Jun 25 16:16:42.169310 containerd[1445]: time="2024-06-25T16:16:42.169286962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:16:42.170529 containerd[1445]: time="2024-06-25T16:16:42.170276557Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 25 16:16:42.170786 containerd[1445]: time="2024-06-25T16:16:42.170684831Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:16:42.171434 containerd[1445]: time="2024-06-25T16:16:42.171407851Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:16:42.173027 containerd[1445]: time="2024-06-25T16:16:42.172994645Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:16:42.173087 containerd[1445]: time="2024-06-25T16:16:42.173060021Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:16:42.174844 containerd[1445]: time="2024-06-25T16:16:42.174823644Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:16:42.176892 containerd[1445]: time="2024-06-25T16:16:42.176874802Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:16:42.177430 containerd[1445]: 
time="2024-06-25T16:16:42.177418516Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:16:42.178084 containerd[1445]: time="2024-06-25T16:16:42.178072391Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:16:42.178666 containerd[1445]: time="2024-06-25T16:16:42.178655014Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:16:42.179483 containerd[1445]: time="2024-06-25T16:16:42.179452816Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 456.605409ms" Jun 25 16:16:42.180211 containerd[1445]: time="2024-06-25T16:16:42.180196356Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 452.983498ms" Jun 25 16:16:42.182159 containerd[1445]: time="2024-06-25T16:16:42.182130205Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:16:42.182852 containerd[1445]: time="2024-06-25T16:16:42.182834196Z" level=info 
msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:16:42.183696 containerd[1445]: time="2024-06-25T16:16:42.183671495Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:16:42.184600 containerd[1445]: time="2024-06-25T16:16:42.184378947Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:16:42.186745 containerd[1445]: time="2024-06-25T16:16:42.186728843Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 462.520069ms" Jun 25 16:16:42.304688 kubelet[2238]: W0625 16:16:42.304122 2238 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:16:42.304688 kubelet[2238]: E0625 16:16:42.304146 2238 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection 
refused Jun 25 16:16:42.327868 containerd[1445]: time="2024-06-25T16:16:42.327736419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:16:42.327868 containerd[1445]: time="2024-06-25T16:16:42.327772215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:16:42.327868 containerd[1445]: time="2024-06-25T16:16:42.327784074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:16:42.327868 containerd[1445]: time="2024-06-25T16:16:42.327792588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:16:42.328384 containerd[1445]: time="2024-06-25T16:16:42.328326713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:16:42.328641 containerd[1445]: time="2024-06-25T16:16:42.328359742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:16:42.328641 containerd[1445]: time="2024-06-25T16:16:42.328550667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:16:42.328641 containerd[1445]: time="2024-06-25T16:16:42.328561293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:16:42.328951 containerd[1445]: time="2024-06-25T16:16:42.328901910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:16:42.329123 containerd[1445]: time="2024-06-25T16:16:42.328926732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:16:42.329168 containerd[1445]: time="2024-06-25T16:16:42.329098002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:16:42.329168 containerd[1445]: time="2024-06-25T16:16:42.329123313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:16:42.335586 kubelet[2238]: W0625 16:16:42.335529 2238 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:16:42.335586 kubelet[2238]: E0625 16:16:42.335565 2238 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:16:42.384855 containerd[1445]: time="2024-06-25T16:16:42.384832460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"f93d552c9ce74ad86e92c941dfb4963c6e69a5c9c5e40c4b4ebc59b9f8c591e8\"" Jun 25 16:16:42.389014 containerd[1445]: time="2024-06-25T16:16:42.388979191Z" level=info msg="CreateContainer within sandbox \"f93d552c9ce74ad86e92c941dfb4963c6e69a5c9c5e40c4b4ebc59b9f8c591e8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 16:16:42.400221 containerd[1445]: 
time="2024-06-25T16:16:42.400196171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:41fdcc238a1adae31bcb8dc35d7df311,Namespace:kube-system,Attempt:0,} returns sandbox id \"b58ec7aefdc18c560e183c64ea5686c4825026a1297157a2854af21d493193bb\"" Jun 25 16:16:42.402262 containerd[1445]: time="2024-06-25T16:16:42.402247532Z" level=info msg="CreateContainer within sandbox \"b58ec7aefdc18c560e183c64ea5686c4825026a1297157a2854af21d493193bb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 16:16:42.404791 containerd[1445]: time="2024-06-25T16:16:42.404764325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcec943594306d64cbc6c2a6af3d8f14fefbf7cdb45ee5733bf7cdd907f1491a\"" Jun 25 16:16:42.405916 containerd[1445]: time="2024-06-25T16:16:42.405899208Z" level=info msg="CreateContainer within sandbox \"bcec943594306d64cbc6c2a6af3d8f14fefbf7cdb45ee5733bf7cdd907f1491a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 16:16:42.474459 kubelet[2238]: W0625 16:16:42.474395 2238 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:16:42.474459 kubelet[2238]: E0625 16:16:42.474444 2238 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 25 16:16:42.688564 kubelet[2238]: E0625 16:16:42.688543 2238 controller.go:146] "Failed to ensure lease exists, will retry" err="Get 
\"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="1.6s" Jun 25 16:16:42.789694 kubelet[2238]: I0625 16:16:42.789676 2238 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:16:42.789873 kubelet[2238]: E0625 16:16:42.789862 2238 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jun 25 16:16:42.877217 containerd[1445]: time="2024-06-25T16:16:42.877179715Z" level=info msg="CreateContainer within sandbox \"f93d552c9ce74ad86e92c941dfb4963c6e69a5c9c5e40c4b4ebc59b9f8c591e8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"006ccd3ae00819a07d84575cb61c5cebf691e07f3dd0dc4551f2e272e2bb1203\"" Jun 25 16:16:42.877674 containerd[1445]: time="2024-06-25T16:16:42.877660246Z" level=info msg="StartContainer for \"006ccd3ae00819a07d84575cb61c5cebf691e07f3dd0dc4551f2e272e2bb1203\"" Jun 25 16:16:42.925169 containerd[1445]: time="2024-06-25T16:16:42.925129200Z" level=info msg="CreateContainer within sandbox \"bcec943594306d64cbc6c2a6af3d8f14fefbf7cdb45ee5733bf7cdd907f1491a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4fd6e10e7d8dd4cb19d9b0a95311626a313a7f00709994387f33bc645eee96e3\"" Jun 25 16:16:42.925404 containerd[1445]: time="2024-06-25T16:16:42.925389273Z" level=info msg="StartContainer for \"4fd6e10e7d8dd4cb19d9b0a95311626a313a7f00709994387f33bc645eee96e3\"" Jun 25 16:16:42.925712 containerd[1445]: time="2024-06-25T16:16:42.925697506Z" level=info msg="CreateContainer within sandbox \"b58ec7aefdc18c560e183c64ea5686c4825026a1297157a2854af21d493193bb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6eb7304617f5449f920cc8c9a9130e0f9c6216fd45d77ddf434bf588a5d350c0\"" 
Jun 25 16:16:42.925921 containerd[1445]: time="2024-06-25T16:16:42.925909347Z" level=info msg="StartContainer for \"6eb7304617f5449f920cc8c9a9130e0f9c6216fd45d77ddf434bf588a5d350c0\""
Jun 25 16:16:42.939903 containerd[1445]: time="2024-06-25T16:16:42.939842515Z" level=info msg="StartContainer for \"006ccd3ae00819a07d84575cb61c5cebf691e07f3dd0dc4551f2e272e2bb1203\" returns successfully"
Jun 25 16:16:42.998866 containerd[1445]: time="2024-06-25T16:16:42.998752911Z" level=info msg="StartContainer for \"6eb7304617f5449f920cc8c9a9130e0f9c6216fd45d77ddf434bf588a5d350c0\" returns successfully"
Jun 25 16:16:43.017947 containerd[1445]: time="2024-06-25T16:16:43.017743888Z" level=info msg="StartContainer for \"4fd6e10e7d8dd4cb19d9b0a95311626a313a7f00709994387f33bc645eee96e3\" returns successfully"
Jun 25 16:16:43.403549 kubelet[2238]: E0625 16:16:43.403480 2238 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.105:6443: connect: connection refused
Jun 25 16:16:44.390562 kubelet[2238]: I0625 16:16:44.390543 2238 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jun 25 16:16:44.478821 kubelet[2238]: E0625 16:16:44.478794 2238 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jun 25 16:16:44.566889 kubelet[2238]: I0625 16:16:44.566788 2238 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Jun 25 16:16:45.274245 kubelet[2238]: I0625 16:16:45.274220 2238 apiserver.go:52] "Watching apiserver"
Jun 25 16:16:45.284644 kubelet[2238]: I0625 16:16:45.284613 2238 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jun 25 16:16:47.045768 systemd[1]: Reloading.
Jun 25 16:16:47.149388 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jun 25 16:16:47.160792 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 16:16:47.213718 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 16:16:47.230866 systemd[1]: kubelet.service: Deactivated successfully.
Jun 25 16:16:47.231109 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 16:16:47.233612 kernel: kauditd_printk_skb: 30 callbacks suppressed
Jun 25 16:16:47.233666 kernel: audit: type=1131 audit(1719332207.230:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:16:47.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:16:47.237058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 16:16:47.869229 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 16:16:47.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:16:47.872631 kernel: audit: type=1130 audit(1719332207.869:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Jun 25 16:16:47.960949 kubelet[2601]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 25 16:16:47.961190 kubelet[2601]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jun 25 16:16:47.961221 kubelet[2601]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 25 16:16:47.961310 kubelet[2601]: I0625 16:16:47.961288 2601 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 25 16:16:47.963903 kubelet[2601]: I0625 16:16:47.963893 2601 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jun 25 16:16:47.964023 kubelet[2601]: I0625 16:16:47.964017 2601 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 25 16:16:47.964198 kubelet[2601]: I0625 16:16:47.964191 2601 server.go:895] "Client rotation is on, will bootstrap in background"
Jun 25 16:16:47.965207 kubelet[2601]: I0625 16:16:47.965199 2601 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jun 25 16:16:47.966068 kubelet[2601]: I0625 16:16:47.966058 2601 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 25 16:16:47.971901 kubelet[2601]: I0625 16:16:47.971886 2601 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 25 16:16:47.972129 kubelet[2601]: I0625 16:16:47.972119 2601 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 25 16:16:47.972240 kubelet[2601]: I0625 16:16:47.972228 2601 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jun 25 16:16:47.972342 kubelet[2601]: I0625 16:16:47.972243 2601 topology_manager.go:138] "Creating topology manager with none policy"
Jun 25 16:16:47.972342 kubelet[2601]: I0625 16:16:47.972249 2601 container_manager_linux.go:301] "Creating device plugin manager"
Jun 25 16:16:47.972342 kubelet[2601]:
I0625 16:16:47.972274 2601 state_mem.go:36] "Initialized new in-memory state store"
Jun 25 16:16:47.973817 kubelet[2601]: I0625 16:16:47.973807 2601 kubelet.go:393] "Attempting to sync node with API server"
Jun 25 16:16:47.973851 kubelet[2601]: I0625 16:16:47.973821 2601 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 25 16:16:47.973851 kubelet[2601]: I0625 16:16:47.973835 2601 kubelet.go:309] "Adding apiserver pod source"
Jun 25 16:16:47.973851 kubelet[2601]: I0625 16:16:47.973844 2601 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 25 16:16:47.976891 kubelet[2601]: I0625 16:16:47.976880 2601 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1"
Jun 25 16:16:47.979038 kubelet[2601]: I0625 16:16:47.977529 2601 server.go:1232] "Started kubelet"
Jun 25 16:16:47.979038 kubelet[2601]: I0625 16:16:47.978645 2601 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 25 16:16:47.983249 kubelet[2601]: I0625 16:16:47.983237 2601 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jun 25 16:16:47.987200 kubelet[2601]: I0625 16:16:47.987187 2601 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jun 25 16:16:47.987409 kubelet[2601]: I0625 16:16:47.987402 2601 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 25 16:16:47.990046 kubelet[2601]: I0625 16:16:47.990035 2601 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jun 25 16:16:47.992266 kubelet[2601]: I0625 16:16:47.992255 2601 server.go:462] "Adding debug handlers to kubelet server"
Jun 25 16:16:47.993036 kubelet[2601]: I0625 16:16:47.993028 2601 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jun 25 16:16:47.993150 kubelet[2601]: I0625 16:16:47.993144 2601 reconciler_new.go:29] "Reconciler: start to sync state"
Jun 25 16:16:48.000490 kubelet[2601]: I0625 16:16:48.000475 2601 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jun 25 16:16:48.001328 kubelet[2601]: I0625 16:16:48.001319 2601 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jun 25 16:16:48.001401 kubelet[2601]: I0625 16:16:48.001396 2601 status_manager.go:217] "Starting to sync pod status with apiserver"
Jun 25 16:16:48.001449 kubelet[2601]: I0625 16:16:48.001444 2601 kubelet.go:2303] "Starting kubelet main sync loop"
Jun 25 16:16:48.001523 kubelet[2601]: E0625 16:16:48.001517 2601 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 25 16:16:48.007110 kubelet[2601]: E0625 16:16:48.007089 2601 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jun 25 16:16:48.007110 kubelet[2601]: E0625 16:16:48.007108 2601 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 25 16:16:48.092387 kubelet[2601]: I0625 16:16:48.092374 2601 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jun 25 16:16:48.097742 kubelet[2601]: I0625 16:16:48.097725 2601 kubelet_node_status.go:108] "Node was previously registered" node="localhost"
Jun 25 16:16:48.097872 kubelet[2601]: I0625 16:16:48.097866 2601 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Jun 25 16:16:48.103107 kubelet[2601]: E0625 16:16:48.103083 2601 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jun 25 16:16:48.108842 kubelet[2601]: I0625 16:16:48.108821 2601 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jun 25 16:16:48.108979 kubelet[2601]: I0625 16:16:48.108973 2601 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jun 25 16:16:48.109024 kubelet[2601]: I0625 16:16:48.109019 2601 state_mem.go:36] "Initialized new in-memory state store"
Jun 25 16:16:48.109864 kubelet[2601]: I0625 16:16:48.109856 2601 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jun 25 16:16:48.109922 kubelet[2601]: I0625 16:16:48.109916 2601 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jun 25 16:16:48.109969 kubelet[2601]: I0625 16:16:48.109962 2601 policy_none.go:49] "None policy: Start"
Jun 25 16:16:48.111075 kubelet[2601]: I0625 16:16:48.111067 2601 memory_manager.go:169] "Starting memorymanager" policy="None"
Jun 25 16:16:48.111133 kubelet[2601]: I0625 16:16:48.111127 2601 state_mem.go:35] "Initializing new in-memory state store"
Jun 25 16:16:48.111284 kubelet[2601]: I0625 16:16:48.111276 2601 state_mem.go:75] "Updated machine memory state"
Jun 25 16:16:48.111980 kubelet[2601]: I0625 16:16:48.111973 2601 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 25 16:16:48.113770 kubelet[2601]:
I0625 16:16:48.113760 2601 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 25 16:16:48.303780 kubelet[2601]: I0625 16:16:48.303757 2601 topology_manager.go:215] "Topology Admit Handler" podUID="41fdcc238a1adae31bcb8dc35d7df311" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jun 25 16:16:48.303977 kubelet[2601]: I0625 16:16:48.303968 2601 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jun 25 16:16:48.304055 kubelet[2601]: I0625 16:16:48.304047 2601 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jun 25 16:16:48.313536 kubelet[2601]: E0625 16:16:48.313516 2601 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jun 25 16:16:48.394650 kubelet[2601]: I0625 16:16:48.394604 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41fdcc238a1adae31bcb8dc35d7df311-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"41fdcc238a1adae31bcb8dc35d7df311\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 16:16:48.394650 kubelet[2601]: I0625 16:16:48.394654 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41fdcc238a1adae31bcb8dc35d7df311-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"41fdcc238a1adae31bcb8dc35d7df311\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 16:16:48.394818 kubelet[2601]: I0625 16:16:48.394670 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 16:16:48.394818 kubelet[2601]: I0625 16:16:48.394683 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 16:16:48.394818 kubelet[2601]: I0625 16:16:48.394697 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 16:16:48.394818 kubelet[2601]: I0625 16:16:48.394708 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost"
Jun 25 16:16:48.394818 kubelet[2601]: I0625 16:16:48.394720 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41fdcc238a1adae31bcb8dc35d7df311-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"41fdcc238a1adae31bcb8dc35d7df311\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 16:16:48.394977 kubelet[2601]: I0625 16:16:48.394731 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 16:16:48.394977 kubelet[2601]: I0625 16:16:48.394743 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 16:16:48.974184 kubelet[2601]: I0625 16:16:48.974157 2601 apiserver.go:52] "Watching apiserver"
Jun 25 16:16:48.993592 kubelet[2601]: I0625 16:16:48.993567 2601 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jun 25 16:16:49.078860 kubelet[2601]: I0625 16:16:49.078840 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.078783513 podCreationTimestamp="2024-06-25 16:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:16:49.069748533 +0000 UTC m=+1.186465395" watchObservedRunningTime="2024-06-25 16:16:49.078783513 +0000 UTC m=+1.195500363"
Jun 25 16:16:49.087973 kubelet[2601]: I0625 16:16:49.087955 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.087930727 podCreationTimestamp="2024-06-25 16:16:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:16:49.079360679 +0000 UTC m=+1.196077531" watchObservedRunningTime="2024-06-25 16:16:49.087930727 +0000 UTC m=+1.204647579"
Jun 25 16:16:49.107009 kubelet[2601]: I0625 16:16:49.106990 2601
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.10696546 podCreationTimestamp="2024-06-25 16:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:16:49.090528397 +0000 UTC m=+1.207245250" watchObservedRunningTime="2024-06-25 16:16:49.10696546 +0000 UTC m=+1.223682311"
Jun 25 16:16:52.777897 sudo[1739]: pam_unix(sudo:session): session closed for user root
Jun 25 16:16:52.777000 audit[1739]: USER_END pid=1739 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jun 25 16:16:52.777000 audit[1739]: CRED_DISP pid=1739 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jun 25 16:16:52.781748 kernel: audit: type=1106 audit(1719332212.777:224): pid=1739 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jun 25 16:16:52.781793 kernel: audit: type=1104 audit(1719332212.777:225): pid=1739 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jun 25 16:16:52.782947 sshd[1733]: pam_unix(sshd:session): session closed for user core
Jun 25 16:16:52.783000 audit[1733]: USER_END pid=1733 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:16:52.785232 systemd-logind[1426]: Session 9 logged out. Waiting for processes to exit.
Jun 25 16:16:52.783000 audit[1733]: CRED_DISP pid=1733 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:16:52.786468 systemd[1]: sshd@6-139.178.70.105:22-139.178.68.195:35348.service: Deactivated successfully.
Jun 25 16:16:52.787063 systemd[1]: session-9.scope: Deactivated successfully.
Jun 25 16:16:52.788195 kernel: audit: type=1106 audit(1719332212.783:226): pid=1733 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:16:52.788228 kernel: audit: type=1104 audit(1719332212.783:227): pid=1733 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:16:52.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.105:22-139.178.68.195:35348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:16:52.788553 systemd-logind[1426]: Removed session 9.
Jun 25 16:16:52.790220 kernel: audit: type=1131 audit(1719332212.786:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.105:22-139.178.68.195:35348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:17:01.368022 kubelet[2601]: I0625 16:17:01.368003 2601 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jun 25 16:17:01.372739 containerd[1445]: time="2024-06-25T16:17:01.372711379Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 25 16:17:01.373042 kubelet[2601]: I0625 16:17:01.373032 2601 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jun 25 16:17:01.966818 kubelet[2601]: I0625 16:17:01.966797 2601 topology_manager.go:215] "Topology Admit Handler" podUID="ed86e673-1565-4289-8c79-43a15c639ba8" podNamespace="kube-system" podName="kube-proxy-9mx4l"
Jun 25 16:17:02.086207 kubelet[2601]: I0625 16:17:02.086190 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ed86e673-1565-4289-8c79-43a15c639ba8-kube-proxy\") pod \"kube-proxy-9mx4l\" (UID: \"ed86e673-1565-4289-8c79-43a15c639ba8\") " pod="kube-system/kube-proxy-9mx4l"
Jun 25 16:17:02.086207 kubelet[2601]: I0625 16:17:02.086213 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed86e673-1565-4289-8c79-43a15c639ba8-xtables-lock\") pod \"kube-proxy-9mx4l\" (UID: \"ed86e673-1565-4289-8c79-43a15c639ba8\") " pod="kube-system/kube-proxy-9mx4l"
Jun 25 16:17:02.086331 kubelet[2601]: I0625 16:17:02.086226 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started
for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed86e673-1565-4289-8c79-43a15c639ba8-lib-modules\") pod \"kube-proxy-9mx4l\" (UID: \"ed86e673-1565-4289-8c79-43a15c639ba8\") " pod="kube-system/kube-proxy-9mx4l"
Jun 25 16:17:02.086331 kubelet[2601]: I0625 16:17:02.086239 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27kd2\" (UniqueName: \"kubernetes.io/projected/ed86e673-1565-4289-8c79-43a15c639ba8-kube-api-access-27kd2\") pod \"kube-proxy-9mx4l\" (UID: \"ed86e673-1565-4289-8c79-43a15c639ba8\") " pod="kube-system/kube-proxy-9mx4l"
Jun 25 16:17:02.271495 kubelet[2601]: I0625 16:17:02.271434 2601 topology_manager.go:215] "Topology Admit Handler" podUID="23b73424-6195-4c44-b5da-b4dcc8ae28ce" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-8c86m"
Jun 25 16:17:02.274501 containerd[1445]: time="2024-06-25T16:17:02.274121854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9mx4l,Uid:ed86e673-1565-4289-8c79-43a15c639ba8,Namespace:kube-system,Attempt:0,}"
Jun 25 16:17:02.297671 containerd[1445]: time="2024-06-25T16:17:02.297609609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 16:17:02.297811 containerd[1445]: time="2024-06-25T16:17:02.297793251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 16:17:02.297885 containerd[1445]: time="2024-06-25T16:17:02.297870292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 16:17:02.297947 containerd[1445]: time="2024-06-25T16:17:02.297934762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 16:17:02.322723 containerd[1445]: time="2024-06-25T16:17:02.322701632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9mx4l,Uid:ed86e673-1565-4289-8c79-43a15c639ba8,Namespace:kube-system,Attempt:0,} returns sandbox id \"53c45764a9594f3f097be6b17d170f54f1a167af86ea3670d49ab2cf00032f92\""
Jun 25 16:17:02.324701 containerd[1445]: time="2024-06-25T16:17:02.324687729Z" level=info msg="CreateContainer within sandbox \"53c45764a9594f3f097be6b17d170f54f1a167af86ea3670d49ab2cf00032f92\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 25 16:17:02.387360 kubelet[2601]: I0625 16:17:02.387241 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/23b73424-6195-4c44-b5da-b4dcc8ae28ce-var-lib-calico\") pod \"tigera-operator-76c4974c85-8c86m\" (UID: \"23b73424-6195-4c44-b5da-b4dcc8ae28ce\") " pod="tigera-operator/tigera-operator-76c4974c85-8c86m"
Jun 25 16:17:02.387360 kubelet[2601]: I0625 16:17:02.387268 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grc9r\" (UniqueName: \"kubernetes.io/projected/23b73424-6195-4c44-b5da-b4dcc8ae28ce-kube-api-access-grc9r\") pod \"tigera-operator-76c4974c85-8c86m\" (UID: \"23b73424-6195-4c44-b5da-b4dcc8ae28ce\") " pod="tigera-operator/tigera-operator-76c4974c85-8c86m"
Jun 25 16:17:02.399811 containerd[1445]: time="2024-06-25T16:17:02.399786895Z" level=info msg="CreateContainer within sandbox \"53c45764a9594f3f097be6b17d170f54f1a167af86ea3670d49ab2cf00032f92\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"676634927f5bb0e802b2392a503d613d29270caadbb765ede600a79d97833cb7\""
Jun 25 16:17:02.400801 containerd[1445]: time="2024-06-25T16:17:02.400702732Z" level=info msg="StartContainer for \"676634927f5bb0e802b2392a503d613d29270caadbb765ede600a79d97833cb7\""
Jun 25 16:17:02.433469 containerd[1445]: time="2024-06-25T16:17:02.433436516Z" level=info msg="StartContainer for \"676634927f5bb0e802b2392a503d613d29270caadbb765ede600a79d97833cb7\" returns successfully"
Jun 25 16:17:02.577183 containerd[1445]: time="2024-06-25T16:17:02.577120949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-8c86m,Uid:23b73424-6195-4c44-b5da-b4dcc8ae28ce,Namespace:tigera-operator,Attempt:0,}"
Jun 25 16:17:02.597737 containerd[1445]: time="2024-06-25T16:17:02.597692000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 16:17:02.597834 containerd[1445]: time="2024-06-25T16:17:02.597740566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 16:17:02.597834 containerd[1445]: time="2024-06-25T16:17:02.597757649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 16:17:02.597834 containerd[1445]: time="2024-06-25T16:17:02.597771022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 16:17:02.638975 containerd[1445]: time="2024-06-25T16:17:02.638951918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-8c86m,Uid:23b73424-6195-4c44-b5da-b4dcc8ae28ce,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cb5ec4c646e79830f65274162227c480602a5520ef37510a14417bb6313e1787\""
Jun 25 16:17:02.641065 containerd[1445]: time="2024-06-25T16:17:02.641031421Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Jun 25 16:17:02.847787 kernel: audit: type=1325 audit(1719332222.844:229): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2817 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jun 25 16:17:02.847862 kernel: audit: type=1325 audit(1719332222.844:230): table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2818 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jun 25 16:17:02.844000 audit[2817]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2817 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jun 25 16:17:02.844000 audit[2818]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2818 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jun 25 16:17:02.844000 audit[2818]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd91fa07c0 a2=0 a3=7ffd91fa07ac items=0 ppid=2737 pid=2818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:17:02.844000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Jun 25 16:17:02.852286 kernel: audit: type=1300 audit(1719332222.844:230): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd91fa07c0 a2=0 a3=7ffd91fa07ac items=0 ppid=2737 pid=2818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:17:02.852313 kernel: audit: type=1327 audit(1719332222.844:230): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Jun 25 16:17:02.852338 kernel: audit: type=1300 audit(1719332222.844:229): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff3762c6f0 a2=0 a3=7fff3762c6dc items=0 ppid=2737 pid=2817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:17:02.844000 audit[2817]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff3762c6f0 a2=0 a3=7fff3762c6dc items=0 ppid=2737 pid=2817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:17:02.844000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Jun 25 16:17:02.854653 kernel: audit: type=1327 audit(1719332222.844:229): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Jun 25 16:17:02.854000 audit[2821]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2821 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jun 25 16:17:02.857071 kernel: audit: type=1325 audit(1719332222.854:231): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2821 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jun 25 16:17:02.854000 audit[2821]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3ba43b70 a2=0 a3=7ffc3ba43b5c items=0 ppid=2737 pid=2821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:17:02.857708 kernel: audit: type=1300 audit(1719332222.854:231): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3ba43b70 a2=0 a3=7ffc3ba43b5c items=0 ppid=2737 pid=2821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:17:02.854000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174
Jun 25 16:17:02.861002 kernel: audit: type=1327 audit(1719332222.854:231): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174
Jun 25 16:17:02.861041 kernel: audit: type=1325 audit(1719332222.855:232): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2822 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jun 25 16:17:02.855000 audit[2822]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2822 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jun 25 16:17:02.855000 audit[2822]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffebdbdea0 a2=0 a3=7fffebdbde8c items=0 ppid=2737 pid=2822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:17:02.855000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572
Jun 25 16:17:02.856000 audit[2823]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2823 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jun 25 16:17:02.856000 audit[2823]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcddf2f4c0 a2=0 a3=7ffcddf2f4ac items=0 ppid=2737 pid=2823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:17:02.856000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174
Jun 25 16:17:02.857000 audit[2824]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2824 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jun 25 16:17:02.857000 audit[2824]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce6e30d10 a2=0 a3=7ffce6e30cfc items=0 ppid=2737 pid=2824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:17:02.857000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572
Jun 25 16:17:02.958000 audit[2825]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2825 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jun 25 16:17:02.958000 audit[2825]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff3325e720 a2=0 a3=7fff3325e70c items=0 ppid=2737 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:17:02.958000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572
Jun 25 16:17:02.983000 audit[2827]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2827 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jun 25
16:17:02.983000 audit[2827]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd567b6a40 a2=0 a3=7ffd567b6a2c items=0 ppid=2737 pid=2827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:02.983000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 16:17:02.989000 audit[2830]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2830 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:17:02.989000 audit[2830]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe86697d40 a2=0 a3=7ffe86697d2c items=0 ppid=2737 pid=2830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:02.989000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 16:17:02.990000 audit[2831]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2831 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:17:02.990000 audit[2831]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf2ccbae0 a2=0 a3=7ffcf2ccbacc items=0 ppid=2737 pid=2831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jun 25 16:17:02.990000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:17:02.992000 audit[2833]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2833 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:17:02.992000 audit[2833]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe434e6ec0 a2=0 a3=7ffe434e6eac items=0 ppid=2737 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:02.992000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:17:02.993000 audit[2834]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2834 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:17:02.993000 audit[2834]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb5dd66e0 a2=0 a3=7ffcb5dd66cc items=0 ppid=2737 pid=2834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:02.993000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:17:02.995000 audit[2836]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2836 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:17:02.995000 audit[2836]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffb92b98d0 a2=0 a3=7fffb92b98bc items=0 ppid=2737 pid=2836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:02.995000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:17:02.999000 audit[2839]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2839 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:17:02.999000 audit[2839]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffedf8ab3d0 a2=0 a3=7ffedf8ab3bc items=0 ppid=2737 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:02.999000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 16:17:03.000000 audit[2840]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2840 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:17:03.000000 audit[2840]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd997ca1d0 a2=0 a3=7ffd997ca1bc items=0 ppid=2737 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.000000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:17:03.003000 audit[2842]: 
NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2842 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:17:03.003000 audit[2842]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff3c4e0230 a2=0 a3=7fff3c4e021c items=0 ppid=2737 pid=2842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.003000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:17:03.004000 audit[2843]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2843 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:17:03.004000 audit[2843]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf59f5e10 a2=0 a3=7ffdf59f5dfc items=0 ppid=2737 pid=2843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.004000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:17:03.007000 audit[2845]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2845 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:17:03.007000 audit[2845]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd4ddf6650 a2=0 a3=7ffd4ddf663c items=0 ppid=2737 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.007000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:17:03.010000 audit[2848]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2848 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:17:03.010000 audit[2848]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdb31bec90 a2=0 a3=7ffdb31bec7c items=0 ppid=2737 pid=2848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.010000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:17:03.013000 audit[2851]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2851 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:17:03.013000 audit[2851]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdfede1da0 a2=0 a3=7ffdfede1d8c items=0 ppid=2737 pid=2851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.013000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:17:03.014000 audit[2852]: NETFILTER_CFG table=nat:58 family=2 entries=1 
op=nft_register_chain pid=2852 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:17:03.014000 audit[2852]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe52af8c70 a2=0 a3=7ffe52af8c5c items=0 ppid=2737 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.014000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:17:03.016000 audit[2854]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2854 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:17:03.016000 audit[2854]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffcd5b6d460 a2=0 a3=7ffcd5b6d44c items=0 ppid=2737 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.016000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:17:03.018000 audit[2857]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2857 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:17:03.018000 audit[2857]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc8158c6d0 a2=0 a3=7ffc8158c6bc items=0 ppid=2737 pid=2857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.018000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:17:03.019000 audit[2858]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2858 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:17:03.019000 audit[2858]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf6fae5a0 a2=0 a3=7ffcf6fae58c items=0 ppid=2737 pid=2858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.019000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:17:03.021000 audit[2860]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2860 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:17:03.021000 audit[2860]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff3c527a40 a2=0 a3=7fff3c527a2c items=0 ppid=2737 pid=2860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.021000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:17:03.034000 audit[2866]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2866 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:03.034000 audit[2866]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffdee01fdc0 a2=0 a3=7ffdee01fdac 
items=0 ppid=2737 pid=2866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.034000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:03.037000 audit[2866]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2866 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:03.037000 audit[2866]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffdee01fdc0 a2=0 a3=7ffdee01fdac items=0 ppid=2737 pid=2866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.037000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:03.040000 audit[2872]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2872 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.040000 audit[2872]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd0de99030 a2=0 a3=7ffd0de9901c items=0 ppid=2737 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.040000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:17:03.042000 audit[2874]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2874 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.042000 audit[2874]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=836 a0=3 a1=7fff02d53e60 a2=0 a3=7fff02d53e4c items=0 ppid=2737 pid=2874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.042000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 16:17:03.044000 audit[2877]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2877 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.044000 audit[2877]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffa43c5630 a2=0 a3=7fffa43c561c items=0 ppid=2737 pid=2877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.044000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 16:17:03.045000 audit[2878]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2878 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.045000 audit[2878]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9016efa0 a2=0 a3=7ffc9016ef8c items=0 ppid=2737 pid=2878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.045000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:17:03.047000 audit[2880]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2880 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.047000 audit[2880]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffaa4d4aa0 a2=0 a3=7fffaa4d4a8c items=0 ppid=2737 pid=2880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.047000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:17:03.048000 audit[2881]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2881 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.048000 audit[2881]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd16b188a0 a2=0 a3=7ffd16b1888c items=0 ppid=2737 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.048000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:17:03.049000 audit[2883]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2883 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.049000 audit[2883]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd310307b0 a2=0 a3=7ffd3103079c items=0 ppid=2737 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.049000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 16:17:03.052000 audit[2886]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2886 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.052000 audit[2886]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fffa4598770 a2=0 a3=7fffa459875c items=0 ppid=2737 pid=2886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.052000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:17:03.053000 audit[2887]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2887 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.053000 audit[2887]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff51808d40 a2=0 a3=7fff51808d2c items=0 ppid=2737 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.053000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:17:03.055000 audit[2889]: NETFILTER_CFG 
table=filter:74 family=10 entries=1 op=nft_register_rule pid=2889 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.055000 audit[2889]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd09aba7d0 a2=0 a3=7ffd09aba7bc items=0 ppid=2737 pid=2889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.055000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:17:03.055000 audit[2890]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2890 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.055000 audit[2890]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc42a047f0 a2=0 a3=7ffc42a047dc items=0 ppid=2737 pid=2890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.055000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:17:03.058000 audit[2892]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2892 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.058000 audit[2892]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff579865a0 a2=0 a3=7fff5798658c items=0 ppid=2737 pid=2892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.058000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:17:03.061000 audit[2895]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2895 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.061000 audit[2895]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff0c8b7d90 a2=0 a3=7fff0c8b7d7c items=0 ppid=2737 pid=2895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.061000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:17:03.065000 audit[2898]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2898 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.065000 audit[2898]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff157b8db0 a2=0 a3=7fff157b8d9c items=0 ppid=2737 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.065000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 16:17:03.067000 audit[2899]: NETFILTER_CFG table=nat:79 family=10 
entries=1 op=nft_register_chain pid=2899 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.067000 audit[2899]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffca707af0 a2=0 a3=7fffca707adc items=0 ppid=2737 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.067000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:17:03.068000 audit[2901]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2901 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.068000 audit[2901]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff7d17f710 a2=0 a3=7fff7d17f6fc items=0 ppid=2737 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.068000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:17:03.074000 audit[2904]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2904 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.074000 audit[2904]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff14e99700 a2=0 a3=7fff14e996ec items=0 ppid=2737 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.074000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:17:03.075000 audit[2905]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2905 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.075000 audit[2905]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc63ff26f0 a2=0 a3=7ffc63ff26dc items=0 ppid=2737 pid=2905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.075000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:17:03.077000 audit[2907]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2907 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.077000 audit[2907]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fffb8455410 a2=0 a3=7fffb84553fc items=0 ppid=2737 pid=2907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.077000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:17:03.078000 audit[2908]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2908 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.078000 audit[2908]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc5e78490 a2=0 
a3=7ffdc5e7847c items=0 ppid=2737 pid=2908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.078000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:17:03.079000 audit[2910]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2910 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.079000 audit[2910]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd21e66510 a2=0 a3=7ffd21e664fc items=0 ppid=2737 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.079000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:17:03.082000 audit[2913]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2913 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:17:03.082000 audit[2913]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe0eb6c3c0 a2=0 a3=7ffe0eb6c3ac items=0 ppid=2737 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.082000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:17:03.084000 audit[2915]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2915 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:17:03.084000 audit[2915]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffc121e3b80 a2=0 a3=7ffc121e3b6c items=0 ppid=2737 pid=2915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.084000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:03.084000 audit[2915]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2915 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:17:03.084000 audit[2915]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc121e3b80 a2=0 a3=7ffc121e3b6c items=0 ppid=2737 pid=2915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:03.084000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:03.211325 systemd[1]: run-containerd-runc-k8s.io-53c45764a9594f3f097be6b17d170f54f1a167af86ea3670d49ab2cf00032f92-runc.tBlrlu.mount: Deactivated successfully. Jun 25 16:17:03.968611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4239634078.mount: Deactivated successfully. 
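The audit `PROCTITLE` records above carry the process command line as a hex string with NUL bytes separating the argv elements. A small helper (an illustrative sketch, not part of auditd or any kube tooling) recovers the readable command; the example decodes the first `PROCTITLE` payload from the stream above:

```python
# Assumption: a hypothetical helper for reading these logs, not a real
# auditd utility. PROCTITLE payloads are hex-encoded argv strings in
# which NUL bytes (0x00) separate the individual arguments.
def decode_proctitle(hexstr: str) -> str:
    return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode()

# First PROCTITLE payload recorded above (pid 2899):
print(decode_proctitle(
    "6970367461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D5345525649434553002D74006E6174"
))  # -> ip6tables -w 5 -W 100000 -N KUBE-SERVICES -t nat
```

Decoded this way, the burst of `NETFILTER_CFG`/`PROCTITLE` pairs reads as kube-proxy setting up its standard chains (`KUBE-SERVICES`, `KUBE-POSTROUTING`, `KUBE-FIREWALL`) for IPv6 (`family=10`) via `xtables-nft-multi`.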
Jun 25 16:17:04.632228 containerd[1445]: time="2024-06-25T16:17:04.632203889Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:04.632560 containerd[1445]: time="2024-06-25T16:17:04.632537966Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076048" Jun 25 16:17:04.633131 containerd[1445]: time="2024-06-25T16:17:04.633119581Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:04.634155 containerd[1445]: time="2024-06-25T16:17:04.634144432Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:04.635078 containerd[1445]: time="2024-06-25T16:17:04.635066217Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:04.635563 containerd[1445]: time="2024-06-25T16:17:04.635549235Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 1.994495065s" Jun 25 16:17:04.635628 containerd[1445]: time="2024-06-25T16:17:04.635606644Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 16:17:04.638124 containerd[1445]: time="2024-06-25T16:17:04.638106820Z" level=info msg="CreateContainer within sandbox 
\"cb5ec4c646e79830f65274162227c480602a5520ef37510a14417bb6313e1787\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 16:17:04.696611 containerd[1445]: time="2024-06-25T16:17:04.696567503Z" level=info msg="CreateContainer within sandbox \"cb5ec4c646e79830f65274162227c480602a5520ef37510a14417bb6313e1787\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"93201ca7056ea0a43a06fb3e8a8fddac10dd7a1cb186f195b7edf4ad60673b8d\"" Jun 25 16:17:04.697059 containerd[1445]: time="2024-06-25T16:17:04.697045000Z" level=info msg="StartContainer for \"93201ca7056ea0a43a06fb3e8a8fddac10dd7a1cb186f195b7edf4ad60673b8d\"" Jun 25 16:17:04.756638 containerd[1445]: time="2024-06-25T16:17:04.756480136Z" level=info msg="StartContainer for \"93201ca7056ea0a43a06fb3e8a8fddac10dd7a1cb186f195b7edf4ad60673b8d\" returns successfully" Jun 25 16:17:05.084183 kubelet[2601]: I0625 16:17:05.084163 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9mx4l" podStartSLOduration=4.08413538 podCreationTimestamp="2024-06-25 16:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:17:03.072358361 +0000 UTC m=+15.189075217" watchObservedRunningTime="2024-06-25 16:17:05.08413538 +0000 UTC m=+17.200852231" Jun 25 16:17:05.085100 kubelet[2601]: I0625 16:17:05.085090 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-8c86m" podStartSLOduration=1.089037738 podCreationTimestamp="2024-06-25 16:17:02 +0000 UTC" firstStartedPulling="2024-06-25 16:17:02.639777458 +0000 UTC m=+14.756494303" lastFinishedPulling="2024-06-25 16:17:04.635809808 +0000 UTC m=+16.752526653" observedRunningTime="2024-06-25 16:17:05.08306875 +0000 UTC m=+17.199785614" watchObservedRunningTime="2024-06-25 16:17:05.085070088 +0000 UTC m=+17.201786945" Jun 25 16:17:05.664071 
systemd[1]: run-containerd-runc-k8s.io-93201ca7056ea0a43a06fb3e8a8fddac10dd7a1cb186f195b7edf4ad60673b8d-runc.KHVNkI.mount: Deactivated successfully. Jun 25 16:17:07.563000 audit[2965]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2965 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:07.563000 audit[2965]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff36da60a0 a2=0 a3=7fff36da608c items=0 ppid=2737 pid=2965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:07.563000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:07.564000 audit[2965]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2965 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:07.564000 audit[2965]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff36da60a0 a2=0 a3=0 items=0 ppid=2737 pid=2965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:07.564000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:07.576000 audit[2967]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2967 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:07.576000 audit[2967]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe8141fe70 a2=0 a3=7ffe8141fe5c items=0 ppid=2737 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:07.576000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:07.578000 audit[2967]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2967 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:07.578000 audit[2967]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe8141fe70 a2=0 a3=0 items=0 ppid=2737 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:07.578000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:07.734024 kubelet[2601]: I0625 16:17:07.734006 2601 topology_manager.go:215] "Topology Admit Handler" podUID="01636fbb-1a08-4411-8d4f-6823b76f886a" podNamespace="calico-system" podName="calico-typha-65464cb47d-rr2wb" Jun 25 16:17:07.770379 kubelet[2601]: I0625 16:17:07.770362 2601 topology_manager.go:215] "Topology Admit Handler" podUID="063c8763-33ac-4d82-bf4e-b1e1d6547e19" podNamespace="calico-system" podName="calico-node-9q7hm" Jun 25 16:17:07.774733 kubelet[2601]: W0625 16:17:07.774264 2601 reflector.go:535] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jun 25 16:17:07.774733 kubelet[2601]: E0625 16:17:07.774292 2601 reflector.go:147] object-"calico-system"/"node-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" 
in the namespace "calico-system": no relationship found between node 'localhost' and this object Jun 25 16:17:07.774733 kubelet[2601]: W0625 16:17:07.774402 2601 reflector.go:535] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jun 25 16:17:07.774733 kubelet[2601]: E0625 16:17:07.774411 2601 reflector.go:147] object-"calico-system"/"cni-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jun 25 16:17:07.816179 kubelet[2601]: I0625 16:17:07.816113 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01636fbb-1a08-4411-8d4f-6823b76f886a-tigera-ca-bundle\") pod \"calico-typha-65464cb47d-rr2wb\" (UID: \"01636fbb-1a08-4411-8d4f-6823b76f886a\") " pod="calico-system/calico-typha-65464cb47d-rr2wb" Jun 25 16:17:07.816302 kubelet[2601]: I0625 16:17:07.816294 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/01636fbb-1a08-4411-8d4f-6823b76f886a-typha-certs\") pod \"calico-typha-65464cb47d-rr2wb\" (UID: \"01636fbb-1a08-4411-8d4f-6823b76f886a\") " pod="calico-system/calico-typha-65464cb47d-rr2wb" Jun 25 16:17:07.816357 kubelet[2601]: I0625 16:17:07.816352 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fwsc\" (UniqueName: \"kubernetes.io/projected/01636fbb-1a08-4411-8d4f-6823b76f886a-kube-api-access-7fwsc\") pod \"calico-typha-65464cb47d-rr2wb\" 
(UID: \"01636fbb-1a08-4411-8d4f-6823b76f886a\") " pod="calico-system/calico-typha-65464cb47d-rr2wb" Jun 25 16:17:07.879832 kubelet[2601]: I0625 16:17:07.879810 2601 topology_manager.go:215] "Topology Admit Handler" podUID="18093c76-756e-42a9-853b-9dc1cb0c45f6" podNamespace="calico-system" podName="csi-node-driver-sdrw2" Jun 25 16:17:07.880113 kubelet[2601]: E0625 16:17:07.880103 2601 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdrw2" podUID="18093c76-756e-42a9-853b-9dc1cb0c45f6" Jun 25 16:17:07.917271 kubelet[2601]: I0625 16:17:07.917251 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/063c8763-33ac-4d82-bf4e-b1e1d6547e19-cni-bin-dir\") pod \"calico-node-9q7hm\" (UID: \"063c8763-33ac-4d82-bf4e-b1e1d6547e19\") " pod="calico-system/calico-node-9q7hm" Jun 25 16:17:07.917423 kubelet[2601]: I0625 16:17:07.917414 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/063c8763-33ac-4d82-bf4e-b1e1d6547e19-cni-log-dir\") pod \"calico-node-9q7hm\" (UID: \"063c8763-33ac-4d82-bf4e-b1e1d6547e19\") " pod="calico-system/calico-node-9q7hm" Jun 25 16:17:07.917512 kubelet[2601]: I0625 16:17:07.917504 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/063c8763-33ac-4d82-bf4e-b1e1d6547e19-policysync\") pod \"calico-node-9q7hm\" (UID: \"063c8763-33ac-4d82-bf4e-b1e1d6547e19\") " pod="calico-system/calico-node-9q7hm" Jun 25 16:17:07.917596 kubelet[2601]: I0625 16:17:07.917590 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/063c8763-33ac-4d82-bf4e-b1e1d6547e19-lib-modules\") pod \"calico-node-9q7hm\" (UID: \"063c8763-33ac-4d82-bf4e-b1e1d6547e19\") " pod="calico-system/calico-node-9q7hm" Jun 25 16:17:07.917671 kubelet[2601]: I0625 16:17:07.917664 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/063c8763-33ac-4d82-bf4e-b1e1d6547e19-xtables-lock\") pod \"calico-node-9q7hm\" (UID: \"063c8763-33ac-4d82-bf4e-b1e1d6547e19\") " pod="calico-system/calico-node-9q7hm" Jun 25 16:17:07.917742 kubelet[2601]: I0625 16:17:07.917735 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/063c8763-33ac-4d82-bf4e-b1e1d6547e19-tigera-ca-bundle\") pod \"calico-node-9q7hm\" (UID: \"063c8763-33ac-4d82-bf4e-b1e1d6547e19\") " pod="calico-system/calico-node-9q7hm" Jun 25 16:17:07.917804 kubelet[2601]: I0625 16:17:07.917798 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/063c8763-33ac-4d82-bf4e-b1e1d6547e19-var-lib-calico\") pod \"calico-node-9q7hm\" (UID: \"063c8763-33ac-4d82-bf4e-b1e1d6547e19\") " pod="calico-system/calico-node-9q7hm" Jun 25 16:17:07.917865 kubelet[2601]: I0625 16:17:07.917859 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/063c8763-33ac-4d82-bf4e-b1e1d6547e19-cni-net-dir\") pod \"calico-node-9q7hm\" (UID: \"063c8763-33ac-4d82-bf4e-b1e1d6547e19\") " pod="calico-system/calico-node-9q7hm" Jun 25 16:17:07.917936 kubelet[2601]: I0625 16:17:07.917929 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/063c8763-33ac-4d82-bf4e-b1e1d6547e19-var-run-calico\") pod \"calico-node-9q7hm\" (UID: \"063c8763-33ac-4d82-bf4e-b1e1d6547e19\") " pod="calico-system/calico-node-9q7hm" Jun 25 16:17:07.918002 kubelet[2601]: I0625 16:17:07.917996 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/063c8763-33ac-4d82-bf4e-b1e1d6547e19-node-certs\") pod \"calico-node-9q7hm\" (UID: \"063c8763-33ac-4d82-bf4e-b1e1d6547e19\") " pod="calico-system/calico-node-9q7hm" Jun 25 16:17:07.918067 kubelet[2601]: I0625 16:17:07.918061 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/063c8763-33ac-4d82-bf4e-b1e1d6547e19-flexvol-driver-host\") pod \"calico-node-9q7hm\" (UID: \"063c8763-33ac-4d82-bf4e-b1e1d6547e19\") " pod="calico-system/calico-node-9q7hm" Jun 25 16:17:07.918131 kubelet[2601]: I0625 16:17:07.918124 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hghrc\" (UniqueName: \"kubernetes.io/projected/063c8763-33ac-4d82-bf4e-b1e1d6547e19-kube-api-access-hghrc\") pod \"calico-node-9q7hm\" (UID: \"063c8763-33ac-4d82-bf4e-b1e1d6547e19\") " pod="calico-system/calico-node-9q7hm" Jun 25 16:17:08.018708 kubelet[2601]: I0625 16:17:08.018682 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/18093c76-756e-42a9-853b-9dc1cb0c45f6-registration-dir\") pod \"csi-node-driver-sdrw2\" (UID: \"18093c76-756e-42a9-853b-9dc1cb0c45f6\") " pod="calico-system/csi-node-driver-sdrw2" Jun 25 16:17:08.034868 kubelet[2601]: E0625 16:17:08.034845 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.035001 kubelet[2601]: W0625 
16:17:08.034989 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.035286 kubelet[2601]: E0625 16:17:08.035265 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.035286 kubelet[2601]: W0625 16:17:08.035278 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.042674 kubelet[2601]: E0625 16:17:08.042644 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.042805 kubelet[2601]: E0625 16:17:08.042644 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.043194 kubelet[2601]: E0625 16:17:08.043182 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.043234 kubelet[2601]: W0625 16:17:08.043199 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.043234 kubelet[2601]: E0625 16:17:08.043213 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:08.043907 kubelet[2601]: E0625 16:17:08.043873 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.043907 kubelet[2601]: W0625 16:17:08.043882 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.043907 kubelet[2601]: E0625 16:17:08.043894 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.044416 containerd[1445]: time="2024-06-25T16:17:08.044386287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65464cb47d-rr2wb,Uid:01636fbb-1a08-4411-8d4f-6823b76f886a,Namespace:calico-system,Attempt:0,}" Jun 25 16:17:08.044773 kubelet[2601]: E0625 16:17:08.044752 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.044773 kubelet[2601]: W0625 16:17:08.044760 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.044773 kubelet[2601]: E0625 16:17:08.044773 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:08.045063 kubelet[2601]: E0625 16:17:08.045053 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.045063 kubelet[2601]: W0625 16:17:08.045061 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.045127 kubelet[2601]: E0625 16:17:08.045078 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.045221 kubelet[2601]: E0625 16:17:08.045210 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.045221 kubelet[2601]: W0625 16:17:08.045217 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.045279 kubelet[2601]: E0625 16:17:08.045226 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:08.045347 kubelet[2601]: E0625 16:17:08.045338 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.045383 kubelet[2601]: W0625 16:17:08.045345 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.045383 kubelet[2601]: E0625 16:17:08.045360 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.045503 kubelet[2601]: E0625 16:17:08.045495 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.045503 kubelet[2601]: W0625 16:17:08.045501 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.045563 kubelet[2601]: E0625 16:17:08.045507 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:08.045682 kubelet[2601]: E0625 16:17:08.045674 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.045682 kubelet[2601]: W0625 16:17:08.045680 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.045743 kubelet[2601]: E0625 16:17:08.045689 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.045743 kubelet[2601]: I0625 16:17:08.045703 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/18093c76-756e-42a9-853b-9dc1cb0c45f6-varrun\") pod \"csi-node-driver-sdrw2\" (UID: \"18093c76-756e-42a9-853b-9dc1cb0c45f6\") " pod="calico-system/csi-node-driver-sdrw2" Jun 25 16:17:08.045858 kubelet[2601]: E0625 16:17:08.045849 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.045858 kubelet[2601]: W0625 16:17:08.045856 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.045909 kubelet[2601]: E0625 16:17:08.045864 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:08.045909 kubelet[2601]: I0625 16:17:08.045874 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18093c76-756e-42a9-853b-9dc1cb0c45f6-kubelet-dir\") pod \"csi-node-driver-sdrw2\" (UID: \"18093c76-756e-42a9-853b-9dc1cb0c45f6\") " pod="calico-system/csi-node-driver-sdrw2" Jun 25 16:17:08.046030 kubelet[2601]: E0625 16:17:08.046018 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.046030 kubelet[2601]: W0625 16:17:08.046026 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.046088 kubelet[2601]: E0625 16:17:08.046033 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.046198 kubelet[2601]: E0625 16:17:08.046188 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.046198 kubelet[2601]: W0625 16:17:08.046195 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.046247 kubelet[2601]: E0625 16:17:08.046202 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:08.046390 kubelet[2601]: E0625 16:17:08.046379 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.046390 kubelet[2601]: W0625 16:17:08.046386 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.046446 kubelet[2601]: E0625 16:17:08.046393 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.046550 kubelet[2601]: E0625 16:17:08.046534 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.046550 kubelet[2601]: W0625 16:17:08.046540 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.046550 kubelet[2601]: E0625 16:17:08.046547 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:08.046705 kubelet[2601]: E0625 16:17:08.046692 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.046705 kubelet[2601]: W0625 16:17:08.046698 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.046766 kubelet[2601]: E0625 16:17:08.046710 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.046766 kubelet[2601]: I0625 16:17:08.046722 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v26zk\" (UniqueName: \"kubernetes.io/projected/18093c76-756e-42a9-853b-9dc1cb0c45f6-kube-api-access-v26zk\") pod \"csi-node-driver-sdrw2\" (UID: \"18093c76-756e-42a9-853b-9dc1cb0c45f6\") " pod="calico-system/csi-node-driver-sdrw2" Jun 25 16:17:08.046898 kubelet[2601]: E0625 16:17:08.046884 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.046898 kubelet[2601]: W0625 16:17:08.046890 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.046898 kubelet[2601]: E0625 16:17:08.046897 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 25 16:17:08.047056 kubelet[2601]: E0625 16:17:08.047045 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.047056 kubelet[2601]: W0625 16:17:08.047051 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.047111 kubelet[2601]: E0625 16:17:08.047059 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.047243 kubelet[2601]: E0625 16:17:08.047216 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.047243 kubelet[2601]: W0625 16:17:08.047222 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.047243 kubelet[2601]: E0625 16:17:08.047234 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 25 16:17:08.047716 kubelet[2601]: E0625 16:17:08.047687 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.047716 kubelet[2601]: W0625 16:17:08.047695 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.047716 kubelet[2601]: E0625 16:17:08.047707 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.047794 kubelet[2601]: I0625 16:17:08.047720 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/18093c76-756e-42a9-853b-9dc1cb0c45f6-socket-dir\") pod \"csi-node-driver-sdrw2\" (UID: \"18093c76-756e-42a9-853b-9dc1cb0c45f6\") " pod="calico-system/csi-node-driver-sdrw2"
Jun 25 16:17:08.049111 kubelet[2601]: E0625 16:17:08.049086 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.049111 kubelet[2601]: W0625 16:17:08.049105 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.049279 kubelet[2601]: E0625 16:17:08.049268 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 25 16:17:08.049497 kubelet[2601]: E0625 16:17:08.049487 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.049497 kubelet[2601]: W0625 16:17:08.049494 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.049675 kubelet[2601]: E0625 16:17:08.049615 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.049675 kubelet[2601]: W0625 16:17:08.049642 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.056542 kubelet[2601]: E0625 16:17:08.049720 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.056542 kubelet[2601]: W0625 16:17:08.049724 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.056542 kubelet[2601]: E0625 16:17:08.049786 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 25 16:17:08.056542 kubelet[2601]: E0625 16:17:08.049797 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.056542 kubelet[2601]: W0625 16:17:08.049801 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.056542 kubelet[2601]: E0625 16:17:08.049802 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.056542 kubelet[2601]: E0625 16:17:08.049809 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.056542 kubelet[2601]: E0625 16:17:08.049815 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.056542 kubelet[2601]: E0625 16:17:08.049882 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.056542 kubelet[2601]: W0625 16:17:08.049886 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.057118 kubelet[2601]: E0625 16:17:08.049922 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 25 16:17:08.057118 kubelet[2601]: E0625 16:17:08.050007 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.057118 kubelet[2601]: W0625 16:17:08.050012 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.057118 kubelet[2601]: E0625 16:17:08.050026 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.057118 kubelet[2601]: E0625 16:17:08.050102 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.057118 kubelet[2601]: W0625 16:17:08.050107 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.057118 kubelet[2601]: E0625 16:17:08.050124 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 25 16:17:08.057118 kubelet[2601]: E0625 16:17:08.050213 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.057118 kubelet[2601]: W0625 16:17:08.050217 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.057118 kubelet[2601]: E0625 16:17:08.050232 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.067853 kubelet[2601]: E0625 16:17:08.050328 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.067853 kubelet[2601]: W0625 16:17:08.050332 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.067853 kubelet[2601]: E0625 16:17:08.050358 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 25 16:17:08.067853 kubelet[2601]: E0625 16:17:08.050457 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.067853 kubelet[2601]: W0625 16:17:08.050462 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.067853 kubelet[2601]: E0625 16:17:08.050480 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.067853 kubelet[2601]: E0625 16:17:08.050559 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.067853 kubelet[2601]: W0625 16:17:08.050565 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.067853 kubelet[2601]: E0625 16:17:08.050578 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 25 16:17:08.067853 kubelet[2601]: E0625 16:17:08.050688 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.068035 kubelet[2601]: W0625 16:17:08.050692 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.068035 kubelet[2601]: E0625 16:17:08.050706 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.068035 kubelet[2601]: E0625 16:17:08.050794 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.068035 kubelet[2601]: W0625 16:17:08.050798 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.068035 kubelet[2601]: E0625 16:17:08.050872 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 25 16:17:08.068035 kubelet[2601]: E0625 16:17:08.050928 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.068035 kubelet[2601]: W0625 16:17:08.050934 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.068035 kubelet[2601]: E0625 16:17:08.050969 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.068035 kubelet[2601]: E0625 16:17:08.051137 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.068035 kubelet[2601]: W0625 16:17:08.051142 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.068209 kubelet[2601]: E0625 16:17:08.051157 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 25 16:17:08.068209 kubelet[2601]: E0625 16:17:08.051239 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.068209 kubelet[2601]: W0625 16:17:08.051243 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.068209 kubelet[2601]: E0625 16:17:08.051256 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.068209 kubelet[2601]: E0625 16:17:08.051343 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.068209 kubelet[2601]: W0625 16:17:08.051347 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.068209 kubelet[2601]: E0625 16:17:08.051361 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 25 16:17:08.068209 kubelet[2601]: E0625 16:17:08.051453 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.068209 kubelet[2601]: W0625 16:17:08.051457 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.068209 kubelet[2601]: E0625 16:17:08.051467 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.068376 kubelet[2601]: E0625 16:17:08.051585 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.068376 kubelet[2601]: W0625 16:17:08.051589 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.068376 kubelet[2601]: E0625 16:17:08.051600 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 25 16:17:08.068376 kubelet[2601]: E0625 16:17:08.051726 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.068376 kubelet[2601]: W0625 16:17:08.051732 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.068376 kubelet[2601]: E0625 16:17:08.051745 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.068376 kubelet[2601]: E0625 16:17:08.051869 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.068376 kubelet[2601]: W0625 16:17:08.051874 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.068376 kubelet[2601]: E0625 16:17:08.051890 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 25 16:17:08.068376 kubelet[2601]: E0625 16:17:08.051986 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.068559 kubelet[2601]: W0625 16:17:08.052003 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.068559 kubelet[2601]: E0625 16:17:08.052017 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.068559 kubelet[2601]: E0625 16:17:08.052109 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.068559 kubelet[2601]: W0625 16:17:08.052113 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.068559 kubelet[2601]: E0625 16:17:08.052122 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 25 16:17:08.068559 kubelet[2601]: E0625 16:17:08.052208 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.068559 kubelet[2601]: W0625 16:17:08.054783 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.068559 kubelet[2601]: E0625 16:17:08.054810 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.068559 kubelet[2601]: E0625 16:17:08.055000 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.068559 kubelet[2601]: W0625 16:17:08.055006 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.068918 kubelet[2601]: E0625 16:17:08.055026 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 25 16:17:08.068918 kubelet[2601]: E0625 16:17:08.055143 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.068918 kubelet[2601]: W0625 16:17:08.055147 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.068918 kubelet[2601]: E0625 16:17:08.055164 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.068918 kubelet[2601]: E0625 16:17:08.055252 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.068918 kubelet[2601]: W0625 16:17:08.055258 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.068918 kubelet[2601]: E0625 16:17:08.055343 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.068918 kubelet[2601]: W0625 16:17:08.055348 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.068918 kubelet[2601]: E0625 16:17:08.055418 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.068918 kubelet[2601]: W0625 16:17:08.055423 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.068918 kubelet[2601]: E0625 16:17:08.055489 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.069128 kubelet[2601]: W0625 16:17:08.055493 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.069128 kubelet[2601]: E0625 16:17:08.055581 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.069128 kubelet[2601]: W0625 16:17:08.055585 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.069128 kubelet[2601]: E0625 16:17:08.055677 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.069128 kubelet[2601]: W0625 16:17:08.055693 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.069128 kubelet[2601]: E0625 16:17:08.055701 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 25 16:17:08.069128 kubelet[2601]: E0625 16:17:08.055799 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.069128 kubelet[2601]: W0625 16:17:08.055803 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.069128 kubelet[2601]: E0625 16:17:08.055809 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.069128 kubelet[2601]: E0625 16:17:08.056296 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.069299 kubelet[2601]: W0625 16:17:08.056300 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.069299 kubelet[2601]: E0625 16:17:08.056307 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 25 16:17:08.069299 kubelet[2601]: E0625 16:17:08.056395 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.069299 kubelet[2601]: W0625 16:17:08.056400 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.069299 kubelet[2601]: E0625 16:17:08.056406 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.069299 kubelet[2601]: E0625 16:17:08.057244 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.069299 kubelet[2601]: W0625 16:17:08.057250 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.069299 kubelet[2601]: E0625 16:17:08.057260 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 25 16:17:08.069299 kubelet[2601]: E0625 16:17:08.057371 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.069299 kubelet[2601]: W0625 16:17:08.057376 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.069521 kubelet[2601]: E0625 16:17:08.057383 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.069521 kubelet[2601]: E0625 16:17:08.057392 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.069521 kubelet[2601]: E0625 16:17:08.057463 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.069521 kubelet[2601]: W0625 16:17:08.057468 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.069521 kubelet[2601]: E0625 16:17:08.057473 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 25 16:17:08.069521 kubelet[2601]: E0625 16:17:08.057553 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.069521 kubelet[2601]: W0625 16:17:08.057557 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.069521 kubelet[2601]: E0625 16:17:08.057563 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.069521 kubelet[2601]: E0625 16:17:08.057570 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.069521 kubelet[2601]: E0625 16:17:08.057661 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.069738 kubelet[2601]: W0625 16:17:08.057666 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.069738 kubelet[2601]: E0625 16:17:08.057672 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 25 16:17:08.069738 kubelet[2601]: E0625 16:17:08.057754 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.069738 kubelet[2601]: W0625 16:17:08.057758 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.069738 kubelet[2601]: E0625 16:17:08.057764 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.069738 kubelet[2601]: E0625 16:17:08.057770 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.069738 kubelet[2601]: E0625 16:17:08.057837 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.069738 kubelet[2601]: W0625 16:17:08.057841 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.069738 kubelet[2601]: E0625 16:17:08.057847 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 25 16:17:08.069738 kubelet[2601]: E0625 16:17:08.057926 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.069909 kubelet[2601]: W0625 16:17:08.057930 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.069909 kubelet[2601]: E0625 16:17:08.057935 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.069909 kubelet[2601]: E0625 16:17:08.057963 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 16:17:08.069909 kubelet[2601]: E0625 16:17:08.058033 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 16:17:08.069909 kubelet[2601]: W0625 16:17:08.058037 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 16:17:08.069909 kubelet[2601]: E0625 16:17:08.058043 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jun 25 16:17:08.069909 kubelet[2601]: E0625 16:17:08.058110 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.069909 kubelet[2601]: W0625 16:17:08.058113 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.069909 kubelet[2601]: E0625 16:17:08.058119 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.069909 kubelet[2601]: E0625 16:17:08.058140 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.097363 containerd[1445]: time="2024-06-25T16:17:08.096877653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:17:08.097363 containerd[1445]: time="2024-06-25T16:17:08.096923900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:17:08.097363 containerd[1445]: time="2024-06-25T16:17:08.096934100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:17:08.097363 containerd[1445]: time="2024-06-25T16:17:08.096939794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:17:08.153710 kubelet[2601]: E0625 16:17:08.153661 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.153710 kubelet[2601]: W0625 16:17:08.153675 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.153710 kubelet[2601]: E0625 16:17:08.153690 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.153931 kubelet[2601]: E0625 16:17:08.153922 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.153931 kubelet[2601]: W0625 16:17:08.153929 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.153982 kubelet[2601]: E0625 16:17:08.153936 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:08.154029 kubelet[2601]: E0625 16:17:08.154020 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.154029 kubelet[2601]: W0625 16:17:08.154027 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.154082 kubelet[2601]: E0625 16:17:08.154038 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.154157 kubelet[2601]: E0625 16:17:08.154148 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.154157 kubelet[2601]: W0625 16:17:08.154155 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.154198 kubelet[2601]: E0625 16:17:08.154163 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:08.154265 kubelet[2601]: E0625 16:17:08.154257 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.154289 kubelet[2601]: W0625 16:17:08.154263 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.154289 kubelet[2601]: E0625 16:17:08.154278 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.154393 kubelet[2601]: E0625 16:17:08.154380 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.154393 kubelet[2601]: W0625 16:17:08.154390 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.154438 kubelet[2601]: E0625 16:17:08.154397 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:08.154504 kubelet[2601]: E0625 16:17:08.154497 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.154504 kubelet[2601]: W0625 16:17:08.154503 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.154555 kubelet[2601]: E0625 16:17:08.154512 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.154595 kubelet[2601]: E0625 16:17:08.154587 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.154595 kubelet[2601]: W0625 16:17:08.154594 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.154659 kubelet[2601]: E0625 16:17:08.154601 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:08.154695 kubelet[2601]: E0625 16:17:08.154683 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.154695 kubelet[2601]: W0625 16:17:08.154689 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.154743 kubelet[2601]: E0625 16:17:08.154697 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.154796 kubelet[2601]: E0625 16:17:08.154788 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.154796 kubelet[2601]: W0625 16:17:08.154794 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.154838 kubelet[2601]: E0625 16:17:08.154806 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:08.155017 kubelet[2601]: E0625 16:17:08.155006 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.155017 kubelet[2601]: W0625 16:17:08.155015 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.155068 kubelet[2601]: E0625 16:17:08.155028 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.155114 kubelet[2601]: E0625 16:17:08.155106 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.155114 kubelet[2601]: W0625 16:17:08.155112 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.155162 kubelet[2601]: E0625 16:17:08.155122 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:08.157680 kubelet[2601]: E0625 16:17:08.157668 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.157680 kubelet[2601]: W0625 16:17:08.157676 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.157823 kubelet[2601]: E0625 16:17:08.157763 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.157823 kubelet[2601]: E0625 16:17:08.157767 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.157823 kubelet[2601]: W0625 16:17:08.157785 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.157930 kubelet[2601]: E0625 16:17:08.157865 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.157930 kubelet[2601]: W0625 16:17:08.157869 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.157972 kubelet[2601]: E0625 16:17:08.157940 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.157972 kubelet[2601]: W0625 16:17:08.157944 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.157972 kubelet[2601]: E0625 16:17:08.157951 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.158033 kubelet[2601]: E0625 16:17:08.158023 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.158033 kubelet[2601]: W0625 16:17:08.158027 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.158033 kubelet[2601]: E0625 16:17:08.158033 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.158124 kubelet[2601]: E0625 16:17:08.158099 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.158124 kubelet[2601]: W0625 16:17:08.158103 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.158124 kubelet[2601]: E0625 16:17:08.158110 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:08.158228 kubelet[2601]: E0625 16:17:08.158201 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.158228 kubelet[2601]: W0625 16:17:08.158207 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.158228 kubelet[2601]: E0625 16:17:08.158213 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.158228 kubelet[2601]: E0625 16:17:08.158227 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.158312 kubelet[2601]: E0625 16:17:08.158297 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.158312 kubelet[2601]: W0625 16:17:08.158301 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.158312 kubelet[2601]: E0625 16:17:08.158306 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:08.159832 kubelet[2601]: E0625 16:17:08.158373 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.159832 kubelet[2601]: W0625 16:17:08.158380 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.159832 kubelet[2601]: E0625 16:17:08.158388 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.159832 kubelet[2601]: E0625 16:17:08.158483 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.159832 kubelet[2601]: W0625 16:17:08.158487 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.159832 kubelet[2601]: E0625 16:17:08.158492 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.159832 kubelet[2601]: E0625 16:17:08.158501 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:08.159832 kubelet[2601]: E0625 16:17:08.158567 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.159832 kubelet[2601]: W0625 16:17:08.158572 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.159832 kubelet[2601]: E0625 16:17:08.158577 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.161972 kubelet[2601]: E0625 16:17:08.161700 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.161972 kubelet[2601]: W0625 16:17:08.161710 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.161972 kubelet[2601]: E0625 16:17:08.161722 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:08.161972 kubelet[2601]: E0625 16:17:08.161834 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.161972 kubelet[2601]: W0625 16:17:08.161840 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.161972 kubelet[2601]: E0625 16:17:08.161846 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.161972 kubelet[2601]: E0625 16:17:08.161937 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.161972 kubelet[2601]: W0625 16:17:08.161941 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.161972 kubelet[2601]: E0625 16:17:08.161947 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:08.167852 kubelet[2601]: E0625 16:17:08.167797 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.167852 kubelet[2601]: W0625 16:17:08.167810 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.167852 kubelet[2601]: E0625 16:17:08.167826 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.183057 containerd[1445]: time="2024-06-25T16:17:08.183023321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65464cb47d-rr2wb,Uid:01636fbb-1a08-4411-8d4f-6823b76f886a,Namespace:calico-system,Attempt:0,} returns sandbox id \"3f32f535dd0d89d9d63030c561309d5f1273a7c32f548b2c3c8d46bb2582bc2f\"" Jun 25 16:17:08.183964 containerd[1445]: time="2024-06-25T16:17:08.183948231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 16:17:08.255277 kubelet[2601]: E0625 16:17:08.255261 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.255406 kubelet[2601]: W0625 16:17:08.255389 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.255463 kubelet[2601]: E0625 16:17:08.255457 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:08.356311 kubelet[2601]: E0625 16:17:08.356248 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.356311 kubelet[2601]: W0625 16:17:08.356262 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.356311 kubelet[2601]: E0625 16:17:08.356278 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.457542 kubelet[2601]: E0625 16:17:08.457523 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.457542 kubelet[2601]: W0625 16:17:08.457535 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.457542 kubelet[2601]: E0625 16:17:08.457549 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:08.558502 kubelet[2601]: E0625 16:17:08.558475 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.558502 kubelet[2601]: W0625 16:17:08.558495 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.558687 kubelet[2601]: E0625 16:17:08.558513 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.585000 audit[3112]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:08.587623 kernel: kauditd_printk_skb: 155 callbacks suppressed Jun 25 16:17:08.587686 kernel: audit: type=1325 audit(1719332228.585:284): table=filter:93 family=2 entries=16 op=nft_register_rule pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:08.587710 kernel: audit: type=1300 audit(1719332228.585:284): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffeb5d6c950 a2=0 a3=7ffeb5d6c93c items=0 ppid=2737 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:08.585000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffeb5d6c950 a2=0 a3=7ffeb5d6c93c items=0 ppid=2737 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:08.585000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:08.591093 kernel: audit: type=1327 audit(1719332228.585:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:08.590000 audit[3112]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:08.592554 kernel: audit: type=1325 audit(1719332228.590:285): table=nat:94 family=2 entries=12 op=nft_register_rule pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:08.590000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeb5d6c950 a2=0 a3=0 items=0 ppid=2737 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:08.590000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:08.596331 kernel: audit: type=1300 audit(1719332228.590:285): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeb5d6c950 a2=0 a3=0 items=0 ppid=2737 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:08.596370 kernel: audit: type=1327 audit(1719332228.590:285): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:08.640438 kubelet[2601]: E0625 16:17:08.640386 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:08.640438 kubelet[2601]: W0625 16:17:08.640401 2601 
driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:08.640438 kubelet[2601]: E0625 16:17:08.640418 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:08.673288 containerd[1445]: time="2024-06-25T16:17:08.673257394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9q7hm,Uid:063c8763-33ac-4d82-bf4e-b1e1d6547e19,Namespace:calico-system,Attempt:0,}" Jun 25 16:17:08.690169 containerd[1445]: time="2024-06-25T16:17:08.690110415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:17:08.690290 containerd[1445]: time="2024-06-25T16:17:08.690156995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:17:08.690357 containerd[1445]: time="2024-06-25T16:17:08.690327554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:17:08.690430 containerd[1445]: time="2024-06-25T16:17:08.690349153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:17:08.725002 containerd[1445]: time="2024-06-25T16:17:08.724973588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9q7hm,Uid:063c8763-33ac-4d82-bf4e-b1e1d6547e19,Namespace:calico-system,Attempt:0,} returns sandbox id \"6e61bf9c1738d7101a94512660f66109d146d221f2b4114598a6fdb5f699606f\"" Jun 25 16:17:10.002571 kubelet[2601]: E0625 16:17:10.002550 2601 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdrw2" podUID="18093c76-756e-42a9-853b-9dc1cb0c45f6" Jun 25 16:17:10.401121 containerd[1445]: time="2024-06-25T16:17:10.401050842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:10.405763 containerd[1445]: time="2024-06-25T16:17:10.405727167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 16:17:10.419568 containerd[1445]: time="2024-06-25T16:17:10.419550983Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:10.426370 containerd[1445]: time="2024-06-25T16:17:10.426350100Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:10.442595 containerd[1445]: time="2024-06-25T16:17:10.442572318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:10.442958 containerd[1445]: 
time="2024-06-25T16:17:10.442938021Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 2.258971586s" Jun 25 16:17:10.442998 containerd[1445]: time="2024-06-25T16:17:10.442960981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 16:17:10.444771 containerd[1445]: time="2024-06-25T16:17:10.444757400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 16:17:10.465773 containerd[1445]: time="2024-06-25T16:17:10.465751242Z" level=info msg="CreateContainer within sandbox \"3f32f535dd0d89d9d63030c561309d5f1273a7c32f548b2c3c8d46bb2582bc2f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 16:17:10.534465 containerd[1445]: time="2024-06-25T16:17:10.534435837Z" level=info msg="CreateContainer within sandbox \"3f32f535dd0d89d9d63030c561309d5f1273a7c32f548b2c3c8d46bb2582bc2f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6571b58eeac70366797acd78843c5ace14536de398e1b47ccf1967448cbb8523\"" Jun 25 16:17:10.535878 containerd[1445]: time="2024-06-25T16:17:10.534936258Z" level=info msg="StartContainer for \"6571b58eeac70366797acd78843c5ace14536de398e1b47ccf1967448cbb8523\"" Jun 25 16:17:10.581090 containerd[1445]: time="2024-06-25T16:17:10.581060661Z" level=info msg="StartContainer for \"6571b58eeac70366797acd78843c5ace14536de398e1b47ccf1967448cbb8523\" returns successfully" Jun 25 16:17:11.086263 kubelet[2601]: E0625 16:17:11.086191 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:11.086263 
kubelet[2601]: W0625 16:17:11.086203 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:11.086263 kubelet[2601]: E0625 16:17:11.086214 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:11.086746 kubelet[2601]: E0625 16:17:11.086568 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:11.086746 kubelet[2601]: W0625 16:17:11.086575 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:11.086746 kubelet[2601]: E0625 16:17:11.086582 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:17:11.086746 kubelet[2601]: E0625 16:17:11.086698 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:17:11.086746 kubelet[2601]: W0625 16:17:11.086703 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:17:11.086746 kubelet[2601]: E0625 16:17:11.086709 2601 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:17:11.877544 containerd[1445]: time="2024-06-25T16:17:11.877516251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:11.878239 containerd[1445]: time="2024-06-25T16:17:11.878207218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 16:17:11.878477 containerd[1445]: time="2024-06-25T16:17:11.878460839Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:11.880631 containerd[1445]: time="2024-06-25T16:17:11.880603179Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:11.881572 containerd[1445]: time="2024-06-25T16:17:11.881555984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:11.882292 containerd[1445]: time="2024-06-25T16:17:11.882273160Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.437444479s" Jun 25 16:17:11.882326 containerd[1445]: time="2024-06-25T16:17:11.882295300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" 
Jun 25 16:17:11.884903 containerd[1445]: time="2024-06-25T16:17:11.884235908Z" level=info msg="CreateContainer within sandbox \"6e61bf9c1738d7101a94512660f66109d146d221f2b4114598a6fdb5f699606f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:17:11.904973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3772647881.mount: Deactivated successfully. Jun 25 16:17:11.919962 containerd[1445]: time="2024-06-25T16:17:11.919939338Z" level=info msg="CreateContainer within sandbox \"6e61bf9c1738d7101a94512660f66109d146d221f2b4114598a6fdb5f699606f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e640113347da05da5226358ebd954f6e45ef2db33b6365c03270aa58df5d0f1b\"" Jun 25 16:17:11.920286 containerd[1445]: time="2024-06-25T16:17:11.920273242Z" level=info msg="StartContainer for \"e640113347da05da5226358ebd954f6e45ef2db33b6365c03270aa58df5d0f1b\"" Jun 25 16:17:11.963049 containerd[1445]: time="2024-06-25T16:17:11.963024685Z" level=info msg="StartContainer for \"e640113347da05da5226358ebd954f6e45ef2db33b6365c03270aa58df5d0f1b\" returns successfully" Jun 25 16:17:12.002369 kubelet[2601]: E0625 16:17:12.002234 2601 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdrw2" podUID="18093c76-756e-42a9-853b-9dc1cb0c45f6" Jun 25 16:17:12.090504 kubelet[2601]: I0625 16:17:12.090483 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-65464cb47d-rr2wb" podStartSLOduration=2.830990652 podCreationTimestamp="2024-06-25 16:17:07 +0000 UTC" firstStartedPulling="2024-06-25 16:17:08.183687514 +0000 UTC m=+20.300404362" lastFinishedPulling="2024-06-25 16:17:10.443156059 +0000 UTC m=+22.559872907" observedRunningTime="2024-06-25 16:17:11.10205404 +0000 UTC 
m=+23.218770897" watchObservedRunningTime="2024-06-25 16:17:12.090459197 +0000 UTC m=+24.207176046" Jun 25 16:17:12.107999 kubelet[2601]: I0625 16:17:12.107975 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:17:12.382265 containerd[1445]: time="2024-06-25T16:17:12.349196354Z" level=info msg="shim disconnected" id=e640113347da05da5226358ebd954f6e45ef2db33b6365c03270aa58df5d0f1b namespace=k8s.io Jun 25 16:17:12.382389 containerd[1445]: time="2024-06-25T16:17:12.382283895Z" level=warning msg="cleaning up after shim disconnected" id=e640113347da05da5226358ebd954f6e45ef2db33b6365c03270aa58df5d0f1b namespace=k8s.io Jun 25 16:17:12.382389 containerd[1445]: time="2024-06-25T16:17:12.382299271Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:17:12.447807 systemd[1]: run-containerd-runc-k8s.io-e640113347da05da5226358ebd954f6e45ef2db33b6365c03270aa58df5d0f1b-runc.PX05JJ.mount: Deactivated successfully. Jun 25 16:17:12.447904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e640113347da05da5226358ebd954f6e45ef2db33b6365c03270aa58df5d0f1b-rootfs.mount: Deactivated successfully. 
Jun 25 16:17:13.085951 containerd[1445]: time="2024-06-25T16:17:13.085920599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 16:17:14.003237 kubelet[2601]: E0625 16:17:14.003213 2601 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdrw2" podUID="18093c76-756e-42a9-853b-9dc1cb0c45f6" Jun 25 16:17:16.002980 kubelet[2601]: E0625 16:17:16.002527 2601 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdrw2" podUID="18093c76-756e-42a9-853b-9dc1cb0c45f6" Jun 25 16:17:16.346182 containerd[1445]: time="2024-06-25T16:17:16.345988041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:16.346774 containerd[1445]: time="2024-06-25T16:17:16.346747746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 16:17:16.347871 containerd[1445]: time="2024-06-25T16:17:16.347855067Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:16.349738 containerd[1445]: time="2024-06-25T16:17:16.349723765Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:16.350734 containerd[1445]: time="2024-06-25T16:17:16.350720537Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:16.351019 containerd[1445]: time="2024-06-25T16:17:16.350991458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 3.265040401s" Jun 25 16:17:16.351053 containerd[1445]: time="2024-06-25T16:17:16.351023148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 16:17:16.353337 containerd[1445]: time="2024-06-25T16:17:16.353312671Z" level=info msg="CreateContainer within sandbox \"6e61bf9c1738d7101a94512660f66109d146d221f2b4114598a6fdb5f699606f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 16:17:16.364536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4042737454.mount: Deactivated successfully. 
Jun 25 16:17:16.366695 containerd[1445]: time="2024-06-25T16:17:16.366671105Z" level=info msg="CreateContainer within sandbox \"6e61bf9c1738d7101a94512660f66109d146d221f2b4114598a6fdb5f699606f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6a540f6432a2673bcc8daec89b877c6057c5b08b67fc19ffa6ba369c2b148798\"" Jun 25 16:17:16.367987 containerd[1445]: time="2024-06-25T16:17:16.367957317Z" level=info msg="StartContainer for \"6a540f6432a2673bcc8daec89b877c6057c5b08b67fc19ffa6ba369c2b148798\"" Jun 25 16:17:16.433982 containerd[1445]: time="2024-06-25T16:17:16.433955851Z" level=info msg="StartContainer for \"6a540f6432a2673bcc8daec89b877c6057c5b08b67fc19ffa6ba369c2b148798\" returns successfully" Jun 25 16:17:17.363243 systemd[1]: run-containerd-runc-k8s.io-6a540f6432a2673bcc8daec89b877c6057c5b08b67fc19ffa6ba369c2b148798-runc.aKGCWh.mount: Deactivated successfully. Jun 25 16:17:17.735598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a540f6432a2673bcc8daec89b877c6057c5b08b67fc19ffa6ba369c2b148798-rootfs.mount: Deactivated successfully. 
Jun 25 16:17:17.744222 containerd[1445]: time="2024-06-25T16:17:17.739756519Z" level=info msg="shim disconnected" id=6a540f6432a2673bcc8daec89b877c6057c5b08b67fc19ffa6ba369c2b148798 namespace=k8s.io
Jun 25 16:17:17.744222 containerd[1445]: time="2024-06-25T16:17:17.743920824Z" level=warning msg="cleaning up after shim disconnected" id=6a540f6432a2673bcc8daec89b877c6057c5b08b67fc19ffa6ba369c2b148798 namespace=k8s.io
Jun 25 16:17:17.744222 containerd[1445]: time="2024-06-25T16:17:17.743929858Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 16:17:17.745199 kubelet[2601]: I0625 16:17:17.744744 2601 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Jun 25 16:17:17.779633 kubelet[2601]: I0625 16:17:17.778100 2601 topology_manager.go:215] "Topology Admit Handler" podUID="9f501b7c-d0db-4a7b-9ae4-5ee408b81d2c" podNamespace="kube-system" podName="coredns-5dd5756b68-9zfw8"
Jun 25 16:17:17.782047 kubelet[2601]: I0625 16:17:17.780199 2601 topology_manager.go:215] "Topology Admit Handler" podUID="087c036b-d5d3-4f8e-b3ce-55f1030399f4" podNamespace="calico-system" podName="calico-kube-controllers-6d8874f6d7-8dx8x"
Jun 25 16:17:17.782047 kubelet[2601]: I0625 16:17:17.780561 2601 topology_manager.go:215] "Topology Admit Handler" podUID="0a650a4d-de28-477d-828b-814dd78885cf" podNamespace="kube-system" podName="coredns-5dd5756b68-xqfmx"
Jun 25 16:17:17.919246 kubelet[2601]: I0625 16:17:17.919212 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a650a4d-de28-477d-828b-814dd78885cf-config-volume\") pod \"coredns-5dd5756b68-xqfmx\" (UID: \"0a650a4d-de28-477d-828b-814dd78885cf\") " pod="kube-system/coredns-5dd5756b68-xqfmx"
Jun 25 16:17:17.919394 kubelet[2601]: I0625 16:17:17.919384 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsc6v\" (UniqueName: \"kubernetes.io/projected/9f501b7c-d0db-4a7b-9ae4-5ee408b81d2c-kube-api-access-nsc6v\") pod \"coredns-5dd5756b68-9zfw8\" (UID: \"9f501b7c-d0db-4a7b-9ae4-5ee408b81d2c\") " pod="kube-system/coredns-5dd5756b68-9zfw8"
Jun 25 16:17:17.919473 kubelet[2601]: I0625 16:17:17.919467 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sncpg\" (UniqueName: \"kubernetes.io/projected/087c036b-d5d3-4f8e-b3ce-55f1030399f4-kube-api-access-sncpg\") pod \"calico-kube-controllers-6d8874f6d7-8dx8x\" (UID: \"087c036b-d5d3-4f8e-b3ce-55f1030399f4\") " pod="calico-system/calico-kube-controllers-6d8874f6d7-8dx8x"
Jun 25 16:17:17.919556 kubelet[2601]: I0625 16:17:17.919551 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgv2l\" (UniqueName: \"kubernetes.io/projected/0a650a4d-de28-477d-828b-814dd78885cf-kube-api-access-wgv2l\") pod \"coredns-5dd5756b68-xqfmx\" (UID: \"0a650a4d-de28-477d-828b-814dd78885cf\") " pod="kube-system/coredns-5dd5756b68-xqfmx"
Jun 25 16:17:17.919656 kubelet[2601]: I0625 16:17:17.919650 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f501b7c-d0db-4a7b-9ae4-5ee408b81d2c-config-volume\") pod \"coredns-5dd5756b68-9zfw8\" (UID: \"9f501b7c-d0db-4a7b-9ae4-5ee408b81d2c\") " pod="kube-system/coredns-5dd5756b68-9zfw8"
Jun 25 16:17:17.919726 kubelet[2601]: I0625 16:17:17.919721 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/087c036b-d5d3-4f8e-b3ce-55f1030399f4-tigera-ca-bundle\") pod \"calico-kube-controllers-6d8874f6d7-8dx8x\" (UID: \"087c036b-d5d3-4f8e-b3ce-55f1030399f4\") " pod="calico-system/calico-kube-controllers-6d8874f6d7-8dx8x"
Jun 25 16:17:18.004981 containerd[1445]: time="2024-06-25T16:17:18.004567340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sdrw2,Uid:18093c76-756e-42a9-853b-9dc1cb0c45f6,Namespace:calico-system,Attempt:0,}"
Jun 25 16:17:18.090503 containerd[1445]: time="2024-06-25T16:17:18.090479079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xqfmx,Uid:0a650a4d-de28-477d-828b-814dd78885cf,Namespace:kube-system,Attempt:0,}"
Jun 25 16:17:18.092216 containerd[1445]: time="2024-06-25T16:17:18.092190252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d8874f6d7-8dx8x,Uid:087c036b-d5d3-4f8e-b3ce-55f1030399f4,Namespace:calico-system,Attempt:0,}"
Jun 25 16:17:18.092417 containerd[1445]: time="2024-06-25T16:17:18.092350559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-9zfw8,Uid:9f501b7c-d0db-4a7b-9ae4-5ee408b81d2c,Namespace:kube-system,Attempt:0,}"
Jun 25 16:17:18.103941 containerd[1445]: time="2024-06-25T16:17:18.103664727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\""
Jun 25 16:17:18.258870 containerd[1445]: time="2024-06-25T16:17:18.258627558Z" level=error msg="Failed to destroy network for sandbox \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 16:17:18.259077 containerd[1445]: time="2024-06-25T16:17:18.259053365Z" level=error msg="encountered an error cleaning up failed sandbox \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 16:17:18.262426 containerd[1445]: time="2024-06-25T16:17:18.262392028Z" level=error msg="Failed to destroy network for sandbox \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 16:17:18.262673 containerd[1445]: time="2024-06-25T16:17:18.262651220Z" level=error msg="encountered an error cleaning up failed sandbox \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 16:17:18.267696 containerd[1445]: time="2024-06-25T16:17:18.267656411Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xqfmx,Uid:0a650a4d-de28-477d-828b-814dd78885cf,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 16:17:18.269303 kubelet[2601]: E0625 16:17:18.269074 2601 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 16:17:18.269303 kubelet[2601]: E0625 16:17:18.269117 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-xqfmx"
Jun 25 16:17:18.269303 kubelet[2601]: E0625 16:17:18.269132 2601 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-xqfmx"
Jun 25 16:17:18.269422 kubelet[2601]: E0625 16:17:18.269168 2601 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-xqfmx_kube-system(0a650a4d-de28-477d-828b-814dd78885cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-xqfmx_kube-system(0a650a4d-de28-477d-828b-814dd78885cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-xqfmx" podUID="0a650a4d-de28-477d-828b-814dd78885cf"
Jun 25 16:17:18.280002 containerd[1445]: time="2024-06-25T16:17:18.279956711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-9zfw8,Uid:9f501b7c-d0db-4a7b-9ae4-5ee408b81d2c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 16:17:18.280443 kubelet[2601]: E0625 16:17:18.280209 2601 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 16:17:18.280443 kubelet[2601]: E0625 16:17:18.280246 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-9zfw8"
Jun 25 16:17:18.280443 kubelet[2601]: E0625 16:17:18.280262 2601 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-9zfw8"
Jun 25 16:17:18.280551 kubelet[2601]: E0625 16:17:18.280298 2601 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-9zfw8_kube-system(9f501b7c-d0db-4a7b-9ae4-5ee408b81d2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-9zfw8_kube-system(9f501b7c-d0db-4a7b-9ae4-5ee408b81d2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-9zfw8" podUID="9f501b7c-d0db-4a7b-9ae4-5ee408b81d2c"
Jun 25 16:17:18.288915 containerd[1445]: time="2024-06-25T16:17:18.288874674Z" level=error msg="Failed to destroy network for sandbox \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 16:17:18.289138 containerd[1445]: time="2024-06-25T16:17:18.289117088Z" level=error msg="encountered an error cleaning up failed sandbox \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 16:17:18.289186 containerd[1445]: time="2024-06-25T16:17:18.289149768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sdrw2,Uid:18093c76-756e-42a9-853b-9dc1cb0c45f6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 16:17:18.289546 kubelet[2601]: E0625 16:17:18.289325 2601 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 16:17:18.289546 kubelet[2601]: E0625 16:17:18.289358 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sdrw2"
Jun 25 16:17:18.289546 kubelet[2601]: E0625 16:17:18.289371 2601 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sdrw2"
Jun 25 16:17:18.289640 kubelet[2601]: E0625 16:17:18.289412 2601 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sdrw2_calico-system(18093c76-756e-42a9-853b-9dc1cb0c45f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sdrw2_calico-system(18093c76-756e-42a9-853b-9dc1cb0c45f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sdrw2" podUID="18093c76-756e-42a9-853b-9dc1cb0c45f6"
Jun 25 16:17:18.290572 containerd[1445]: time="2024-06-25T16:17:18.290541870Z" level=error msg="Failed to destroy network for sandbox \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 16:17:18.290932 containerd[1445]: time="2024-06-25T16:17:18.290838425Z" level=error msg="encountered an error cleaning up failed sandbox \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 16:17:18.291274 containerd[1445]: time="2024-06-25T16:17:18.291251815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d8874f6d7-8dx8x,Uid:087c036b-d5d3-4f8e-b3ce-55f1030399f4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 16:17:18.291383 kubelet[2601]: E0625 16:17:18.291371 2601 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 16:17:18.291421 kubelet[2601]: E0625 16:17:18.291402 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d8874f6d7-8dx8x"
Jun 25 16:17:18.291421 kubelet[2601]: E0625 16:17:18.291415 2601 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d8874f6d7-8dx8x"
Jun 25 16:17:18.291480 kubelet[2601]: E0625 16:17:18.291454 2601 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d8874f6d7-8dx8x_calico-system(087c036b-d5d3-4f8e-b3ce-55f1030399f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d8874f6d7-8dx8x_calico-system(087c036b-d5d3-4f8e-b3ce-55f1030399f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d8874f6d7-8dx8x" podUID="087c036b-d5d3-4f8e-b3ce-55f1030399f4"
Jun 25 16:17:18.364557 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866-shm.mount: Deactivated successfully.
Jun 25 16:17:19.105287 kubelet[2601]: I0625 16:17:19.104883 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb"
Jun 25 16:17:19.108513 kubelet[2601]: I0625 16:17:19.108497 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37"
Jun 25 16:17:19.117528 kubelet[2601]: I0625 16:17:19.117366 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866"
Jun 25 16:17:19.119545 kubelet[2601]: I0625 16:17:19.119524 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506"
Jun 25 16:17:19.123745 containerd[1445]: time="2024-06-25T16:17:19.123711760Z" level=info msg="StopPodSandbox for \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\""
Jun 25 16:17:19.124461 containerd[1445]: time="2024-06-25T16:17:19.124433967Z" level=info msg="StopPodSandbox for \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\""
Jun 25 16:17:19.126414 containerd[1445]: time="2024-06-25T16:17:19.126394410Z" level=info msg="Ensure that sandbox 3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb in task-service has been cleanup successfully"
Jun 25 16:17:19.127754 containerd[1445]: time="2024-06-25T16:17:19.127444539Z" level=info msg="Ensure that sandbox 26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506 in task-service has been cleanup successfully"
Jun 25 16:17:19.127754 containerd[1445]: time="2024-06-25T16:17:19.127570400Z" level=info msg="StopPodSandbox for \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\""
Jun 25 16:17:19.127754 containerd[1445]: time="2024-06-25T16:17:19.127671252Z" level=info msg="Ensure that sandbox abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37 in task-service has been cleanup successfully"
Jun 25 16:17:19.127869 containerd[1445]: time="2024-06-25T16:17:19.127810469Z" level=info msg="StopPodSandbox for \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\""
Jun 25 16:17:19.127922 containerd[1445]: time="2024-06-25T16:17:19.127900760Z" level=info msg="Ensure that sandbox 447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866 in task-service has been cleanup successfully"
Jun 25 16:17:19.162463 containerd[1445]: time="2024-06-25T16:17:19.162419624Z" level=error msg="StopPodSandbox for \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\" failed" error="failed to destroy network for sandbox \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 16:17:19.162796 kubelet[2601]: E0625 16:17:19.162779 2601 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506"
Jun 25 16:17:19.162870 kubelet[2601]: E0625 16:17:19.162838 2601 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506"}
Jun 25 16:17:19.162870 kubelet[2601]: E0625 16:17:19.162860 2601 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"087c036b-d5d3-4f8e-b3ce-55f1030399f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jun 25 16:17:19.162978 kubelet[2601]: E0625 16:17:19.162885 2601 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"087c036b-d5d3-4f8e-b3ce-55f1030399f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d8874f6d7-8dx8x" podUID="087c036b-d5d3-4f8e-b3ce-55f1030399f4"
Jun 25 16:17:19.166444 containerd[1445]: time="2024-06-25T16:17:19.166407512Z" level=error msg="StopPodSandbox for \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\" failed" error="failed to destroy network for sandbox \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 16:17:19.166816 kubelet[2601]: E0625 16:17:19.166696 2601 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb"
Jun 25 16:17:19.166816 kubelet[2601]: E0625 16:17:19.166726 2601 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb"}
Jun 25 16:17:19.166816 kubelet[2601]: E0625 16:17:19.166758 2601 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9f501b7c-d0db-4a7b-9ae4-5ee408b81d2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jun 25 16:17:19.166816 kubelet[2601]: E0625 16:17:19.166794 2601 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9f501b7c-d0db-4a7b-9ae4-5ee408b81d2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-9zfw8" podUID="9f501b7c-d0db-4a7b-9ae4-5ee408b81d2c"
Jun 25 16:17:19.178589 containerd[1445]: time="2024-06-25T16:17:19.178548266Z" level=error msg="StopPodSandbox for \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\" failed" error="failed to destroy network for sandbox \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 16:17:19.178968 kubelet[2601]: E0625 16:17:19.178754 2601 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37"
Jun 25 16:17:19.178968 kubelet[2601]: E0625 16:17:19.178788 2601 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37"}
Jun 25 16:17:19.178968 kubelet[2601]: E0625 16:17:19.178824 2601 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a650a4d-de28-477d-828b-814dd78885cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jun 25 16:17:19.178968 kubelet[2601]: E0625 16:17:19.178852 2601 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a650a4d-de28-477d-828b-814dd78885cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-xqfmx" podUID="0a650a4d-de28-477d-828b-814dd78885cf"
Jun 25 16:17:19.184215 containerd[1445]: time="2024-06-25T16:17:19.184178543Z" level=error msg="StopPodSandbox for \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\" failed" error="failed to destroy network for sandbox \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 16:17:19.184507 kubelet[2601]: E0625 16:17:19.184489 2601 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866"
Jun 25 16:17:19.184569 kubelet[2601]: E0625 16:17:19.184514 2601 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866"}
Jun 25 16:17:19.184569 kubelet[2601]: E0625 16:17:19.184538 2601 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"18093c76-756e-42a9-853b-9dc1cb0c45f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jun 25 16:17:19.184569 kubelet[2601]: E0625 16:17:19.184564 2601 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"18093c76-756e-42a9-853b-9dc1cb0c45f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sdrw2" podUID="18093c76-756e-42a9-853b-9dc1cb0c45f6"
Jun 25 16:17:21.801309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1221380205.mount: Deactivated successfully.
Jun 25 16:17:21.988539 containerd[1445]: time="2024-06-25T16:17:21.980913513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 16:17:21.989445 containerd[1445]: time="2024-06-25T16:17:21.989417407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750"
Jun 25 16:17:21.997474 containerd[1445]: time="2024-06-25T16:17:21.997451966Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 16:17:22.008874 containerd[1445]: time="2024-06-25T16:17:22.008779212Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 16:17:22.016053 containerd[1445]: time="2024-06-25T16:17:22.016037929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 16:17:22.016320 containerd[1445]: time="2024-06-25T16:17:22.016304849Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 3.912614645s"
Jun 25 16:17:22.016373 containerd[1445]: time="2024-06-25T16:17:22.016363497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\""
Jun 25 16:17:22.168754 containerd[1445]: time="2024-06-25T16:17:22.168726955Z" level=info msg="CreateContainer within sandbox \"6e61bf9c1738d7101a94512660f66109d146d221f2b4114598a6fdb5f699606f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jun 25 16:17:22.185280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3479977612.mount: Deactivated successfully.
Jun 25 16:17:22.205464 containerd[1445]: time="2024-06-25T16:17:22.205433424Z" level=info msg="CreateContainer within sandbox \"6e61bf9c1738d7101a94512660f66109d146d221f2b4114598a6fdb5f699606f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c226ece918507a009446cf70e6bb58c32614915e6f7cd8ea70ff780dab08dd34\""
Jun 25 16:17:22.217965 containerd[1445]: time="2024-06-25T16:17:22.217738121Z" level=info msg="StartContainer for \"c226ece918507a009446cf70e6bb58c32614915e6f7cd8ea70ff780dab08dd34\""
Jun 25 16:17:22.282357 containerd[1445]: time="2024-06-25T16:17:22.282334654Z" level=info msg="StartContainer for \"c226ece918507a009446cf70e6bb58c32614915e6f7cd8ea70ff780dab08dd34\" returns successfully"
Jun 25 16:17:22.361942 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jun 25 16:17:22.362016 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Jun 25 16:17:23.196230 kubelet[2601]: I0625 16:17:23.196209 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-9q7hm" podStartSLOduration=2.8630071790000002 podCreationTimestamp="2024-06-25 16:17:07 +0000 UTC" firstStartedPulling="2024-06-25 16:17:08.725836089 +0000 UTC m=+20.842552936" lastFinishedPulling="2024-06-25 16:17:22.01667614 +0000 UTC m=+34.133392989" observedRunningTime="2024-06-25 16:17:23.148079211 +0000 UTC m=+35.264796085" watchObservedRunningTime="2024-06-25 16:17:23.153847232 +0000 UTC m=+35.270564083"
Jun 25 16:17:23.210090 systemd[1]: run-containerd-runc-k8s.io-c226ece918507a009446cf70e6bb58c32614915e6f7cd8ea70ff780dab08dd34-runc.yqje7b.mount: Deactivated successfully.
Jun 25 16:17:23.608000 audit[3733]: AVC avc: denied { write } for pid=3733 comm="tee" name="fd" dev="proc" ino=31778 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jun 25 16:17:23.612988 kernel: audit: type=1400 audit(1719332243.608:286): avc: denied { write } for pid=3733 comm="tee" name="fd" dev="proc" ino=31778 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jun 25 16:17:23.614393 kernel: audit: type=1300 audit(1719332243.608:286): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcee868a32 a2=241 a3=1b6 items=1 ppid=3693 pid=3733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:17:23.614419 kernel: audit: type=1307 audit(1719332243.608:286): cwd="/etc/service/enabled/felix/log"
Jun 25 16:17:23.608000 audit[3733]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcee868a32 a2=241 a3=1b6 items=1 ppid=3693 pid=3733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:17:23.618042 kernel: audit: type=1302 audit(1719332243.608:286): item=0 name="/dev/fd/63" inode=30963 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jun 25 16:17:23.618066 kernel: audit: type=1327 audit(1719332243.608:286): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Jun 25 16:17:23.608000 audit: CWD cwd="/etc/service/enabled/felix/log"
Jun 25 16:17:23.608000 audit: PATH item=0 name="/dev/fd/63" inode=30963 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jun 25 16:17:23.608000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Jun 25 16:17:23.625000 audit[3730]: AVC avc: denied { write } for pid=3730 comm="tee" name="fd" dev="proc" ino=30988 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jun 25 16:17:23.628629 kernel: audit: type=1400 audit(1719332243.625:287): avc: denied { write } for pid=3730 comm="tee" name="fd" dev="proc" ino=30988 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jun 25 16:17:23.625000 audit[3730]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd16975a33 a2=241 a3=1b6 items=1 ppid=3702 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:17:23.636628 kernel: audit: type=1300 audit(1719332243.625:287): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd16975a33 a2=241 a3=1b6 items=1 ppid=3702 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:17:23.625000 audit: CWD cwd="/etc/service/enabled/bird/log"
Jun 25 16:17:23.643265 kernel: audit: type=1307 audit(1719332243.625:287): cwd="/etc/service/enabled/bird/log"
Jun 25 16:17:23.643302 kernel: audit: type=1302 audit(1719332243.625:287): item=0 name="/dev/fd/63" inode=30962 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jun 25 16:17:23.625000 audit: PATH item=0 name="/dev/fd/63" inode=30962 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jun 25 16:17:23.625000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Jun 25 16:17:23.633000 audit[3748]: AVC avc: denied { write } for pid=3748 comm="tee" name="fd" dev="proc" ino=31003 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jun 25 16:17:23.633000 audit[3748]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd3fa8fa32 a2=241 a3=1b6 items=1 ppid=3705 pid=3748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:17:23.648664 kernel: audit: type=1327 audit(1719332243.625:287): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Jun 25 16:17:23.633000 audit: CWD cwd="/etc/service/enabled/confd/log"
Jun 25 16:17:23.633000 audit: PATH item=0 name="/dev/fd/63" inode=30976
dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:17:23.633000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:17:23.637000 audit[3752]: AVC avc: denied { write } for pid=3752 comm="tee" name="fd" dev="proc" ino=31009 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:17:23.637000 audit[3752]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc09725a22 a2=241 a3=1b6 items=1 ppid=3695 pid=3752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:23.637000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 16:17:23.637000 audit: PATH item=0 name="/dev/fd/63" inode=30985 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:17:23.637000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:17:23.646000 audit[3754]: AVC avc: denied { write } for pid=3754 comm="tee" name="fd" dev="proc" ino=31013 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:17:23.646000 audit[3754]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffda23c6a32 a2=241 a3=1b6 items=1 ppid=3699 pid=3754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:23.646000 audit: CWD 
cwd="/etc/service/enabled/bird6/log" Jun 25 16:17:23.646000 audit: PATH item=0 name="/dev/fd/63" inode=30992 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:17:23.646000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:17:23.680000 audit[3765]: AVC avc: denied { write } for pid=3765 comm="tee" name="fd" dev="proc" ino=31019 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:17:23.680000 audit[3765]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeae44ba23 a2=241 a3=1b6 items=1 ppid=3694 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:23.680000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 16:17:23.680000 audit: PATH item=0 name="/dev/fd/63" inode=31786 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:17:23.680000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:17:23.682000 audit[3767]: AVC avc: denied { write } for pid=3767 comm="tee" name="fd" dev="proc" ino=31792 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:17:23.682000 audit[3767]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff99ae3a34 a2=241 a3=1b6 items=1 ppid=3697 pid=3767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" 
exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:23.682000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 16:17:23.682000 audit: PATH item=0 name="/dev/fd/63" inode=31789 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:17:23.682000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:17:24.159825 systemd[1]: run-containerd-runc-k8s.io-c226ece918507a009446cf70e6bb58c32614915e6f7cd8ea70ff780dab08dd34-runc.gjfgJ5.mount: Deactivated successfully. Jun 25 16:17:25.174044 systemd[1]: run-containerd-runc-k8s.io-c226ece918507a009446cf70e6bb58c32614915e6f7cd8ea70ff780dab08dd34-runc.buRQzJ.mount: Deactivated successfully. Jun 25 16:17:33.002739 containerd[1445]: time="2024-06-25T16:17:33.002658785Z" level=info msg="StopPodSandbox for \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\"" Jun 25 16:17:33.003959 containerd[1445]: time="2024-06-25T16:17:33.002906610Z" level=info msg="StopPodSandbox for \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\"" Jun 25 16:17:33.012186 containerd[1445]: time="2024-06-25T16:17:33.002921458Z" level=info msg="StopPodSandbox for \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\"" Jun 25 16:17:33.381566 containerd[1445]: 2024-06-25 16:17:33.165 [INFO][4042] k8s.go 608: Cleaning up netns ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Jun 25 16:17:33.381566 containerd[1445]: 2024-06-25 16:17:33.165 [INFO][4042] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" iface="eth0" netns="/var/run/netns/cni-74eaad6c-2a66-c05a-9928-5575352b7983" Jun 25 16:17:33.381566 containerd[1445]: 2024-06-25 16:17:33.166 [INFO][4042] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" iface="eth0" netns="/var/run/netns/cni-74eaad6c-2a66-c05a-9928-5575352b7983" Jun 25 16:17:33.381566 containerd[1445]: 2024-06-25 16:17:33.166 [INFO][4042] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" iface="eth0" netns="/var/run/netns/cni-74eaad6c-2a66-c05a-9928-5575352b7983" Jun 25 16:17:33.381566 containerd[1445]: 2024-06-25 16:17:33.166 [INFO][4042] k8s.go 615: Releasing IP address(es) ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Jun 25 16:17:33.381566 containerd[1445]: 2024-06-25 16:17:33.166 [INFO][4042] utils.go 188: Calico CNI releasing IP address ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Jun 25 16:17:33.381566 containerd[1445]: 2024-06-25 16:17:33.365 [INFO][4070] ipam_plugin.go 411: Releasing address using handleID ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" HandleID="k8s-pod-network.26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Workload="localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0" Jun 25 16:17:33.381566 containerd[1445]: 2024-06-25 16:17:33.366 [INFO][4070] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:17:33.381566 containerd[1445]: 2024-06-25 16:17:33.367 [INFO][4070] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:17:33.381566 containerd[1445]: 2024-06-25 16:17:33.376 [WARNING][4070] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" HandleID="k8s-pod-network.26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Workload="localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0" Jun 25 16:17:33.381566 containerd[1445]: 2024-06-25 16:17:33.376 [INFO][4070] ipam_plugin.go 439: Releasing address using workloadID ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" HandleID="k8s-pod-network.26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Workload="localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0" Jun 25 16:17:33.381566 containerd[1445]: 2024-06-25 16:17:33.377 [INFO][4070] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:17:33.381566 containerd[1445]: 2024-06-25 16:17:33.378 [INFO][4042] k8s.go 621: Teardown processing complete. ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Jun 25 16:17:33.381396 systemd[1]: run-netns-cni\x2d74eaad6c\x2d2a66\x2dc05a\x2d9928\x2d5575352b7983.mount: Deactivated successfully. 
Jun 25 16:17:33.382929 containerd[1445]: time="2024-06-25T16:17:33.382331189Z" level=info msg="TearDown network for sandbox \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\" successfully" Jun 25 16:17:33.382929 containerd[1445]: time="2024-06-25T16:17:33.382373780Z" level=info msg="StopPodSandbox for \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\" returns successfully" Jun 25 16:17:33.382977 containerd[1445]: time="2024-06-25T16:17:33.382925243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d8874f6d7-8dx8x,Uid:087c036b-d5d3-4f8e-b3ce-55f1030399f4,Namespace:calico-system,Attempt:1,}" Jun 25 16:17:33.389050 containerd[1445]: 2024-06-25 16:17:33.138 [INFO][4035] k8s.go 608: Cleaning up netns ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Jun 25 16:17:33.389050 containerd[1445]: 2024-06-25 16:17:33.148 [INFO][4035] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" iface="eth0" netns="/var/run/netns/cni-3b7ed59c-464e-a0c7-729e-50ed758ba518" Jun 25 16:17:33.389050 containerd[1445]: 2024-06-25 16:17:33.150 [INFO][4035] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" iface="eth0" netns="/var/run/netns/cni-3b7ed59c-464e-a0c7-729e-50ed758ba518" Jun 25 16:17:33.389050 containerd[1445]: 2024-06-25 16:17:33.161 [INFO][4035] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" iface="eth0" netns="/var/run/netns/cni-3b7ed59c-464e-a0c7-729e-50ed758ba518" Jun 25 16:17:33.389050 containerd[1445]: 2024-06-25 16:17:33.161 [INFO][4035] k8s.go 615: Releasing IP address(es) ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Jun 25 16:17:33.389050 containerd[1445]: 2024-06-25 16:17:33.162 [INFO][4035] utils.go 188: Calico CNI releasing IP address ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Jun 25 16:17:33.389050 containerd[1445]: 2024-06-25 16:17:33.365 [INFO][4069] ipam_plugin.go 411: Releasing address using handleID ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" HandleID="k8s-pod-network.3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Workload="localhost-k8s-coredns--5dd5756b68--9zfw8-eth0" Jun 25 16:17:33.389050 containerd[1445]: 2024-06-25 16:17:33.366 [INFO][4069] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:17:33.389050 containerd[1445]: 2024-06-25 16:17:33.377 [INFO][4069] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:17:33.389050 containerd[1445]: 2024-06-25 16:17:33.383 [WARNING][4069] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" HandleID="k8s-pod-network.3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Workload="localhost-k8s-coredns--5dd5756b68--9zfw8-eth0" Jun 25 16:17:33.389050 containerd[1445]: 2024-06-25 16:17:33.383 [INFO][4069] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" HandleID="k8s-pod-network.3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Workload="localhost-k8s-coredns--5dd5756b68--9zfw8-eth0" Jun 25 16:17:33.389050 containerd[1445]: 2024-06-25 16:17:33.384 [INFO][4069] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:17:33.389050 containerd[1445]: 2024-06-25 16:17:33.386 [INFO][4035] k8s.go 621: Teardown processing complete. ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Jun 25 16:17:33.395462 containerd[1445]: time="2024-06-25T16:17:33.394033863Z" level=info msg="TearDown network for sandbox \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\" successfully" Jun 25 16:17:33.395462 containerd[1445]: time="2024-06-25T16:17:33.394068103Z" level=info msg="StopPodSandbox for \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\" returns successfully" Jun 25 16:17:33.395462 containerd[1445]: time="2024-06-25T16:17:33.394435050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-9zfw8,Uid:9f501b7c-d0db-4a7b-9ae4-5ee408b81d2c,Namespace:kube-system,Attempt:1,}" Jun 25 16:17:33.393250 systemd[1]: run-netns-cni\x2d3b7ed59c\x2d464e\x2da0c7\x2d729e\x2d50ed758ba518.mount: Deactivated successfully. 
Jun 25 16:17:33.397687 containerd[1445]: 2024-06-25 16:17:33.147 [INFO][4033] k8s.go 608: Cleaning up netns ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Jun 25 16:17:33.397687 containerd[1445]: 2024-06-25 16:17:33.149 [INFO][4033] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" iface="eth0" netns="/var/run/netns/cni-6aedd58c-cff4-a68f-0050-6f072ac2659f" Jun 25 16:17:33.397687 containerd[1445]: 2024-06-25 16:17:33.150 [INFO][4033] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" iface="eth0" netns="/var/run/netns/cni-6aedd58c-cff4-a68f-0050-6f072ac2659f" Jun 25 16:17:33.397687 containerd[1445]: 2024-06-25 16:17:33.161 [INFO][4033] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" iface="eth0" netns="/var/run/netns/cni-6aedd58c-cff4-a68f-0050-6f072ac2659f" Jun 25 16:17:33.397687 containerd[1445]: 2024-06-25 16:17:33.161 [INFO][4033] k8s.go 615: Releasing IP address(es) ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Jun 25 16:17:33.397687 containerd[1445]: 2024-06-25 16:17:33.162 [INFO][4033] utils.go 188: Calico CNI releasing IP address ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Jun 25 16:17:33.397687 containerd[1445]: 2024-06-25 16:17:33.365 [INFO][4068] ipam_plugin.go 411: Releasing address using handleID ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" HandleID="k8s-pod-network.abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Workload="localhost-k8s-coredns--5dd5756b68--xqfmx-eth0" Jun 25 16:17:33.397687 containerd[1445]: 2024-06-25 16:17:33.366 [INFO][4068] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jun 25 16:17:33.397687 containerd[1445]: 2024-06-25 16:17:33.384 [INFO][4068] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:17:33.397687 containerd[1445]: 2024-06-25 16:17:33.389 [WARNING][4068] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" HandleID="k8s-pod-network.abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Workload="localhost-k8s-coredns--5dd5756b68--xqfmx-eth0" Jun 25 16:17:33.397687 containerd[1445]: 2024-06-25 16:17:33.389 [INFO][4068] ipam_plugin.go 439: Releasing address using workloadID ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" HandleID="k8s-pod-network.abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Workload="localhost-k8s-coredns--5dd5756b68--xqfmx-eth0" Jun 25 16:17:33.397687 containerd[1445]: 2024-06-25 16:17:33.391 [INFO][4068] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:17:33.397687 containerd[1445]: 2024-06-25 16:17:33.396 [INFO][4033] k8s.go 621: Teardown processing complete. ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Jun 25 16:17:33.401375 containerd[1445]: time="2024-06-25T16:17:33.400107881Z" level=info msg="TearDown network for sandbox \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\" successfully" Jun 25 16:17:33.401375 containerd[1445]: time="2024-06-25T16:17:33.400147788Z" level=info msg="StopPodSandbox for \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\" returns successfully" Jun 25 16:17:33.401375 containerd[1445]: time="2024-06-25T16:17:33.400456035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xqfmx,Uid:0a650a4d-de28-477d-828b-814dd78885cf,Namespace:kube-system,Attempt:1,}" Jun 25 16:17:33.399440 systemd[1]: run-netns-cni\x2d6aedd58c\x2dcff4\x2da68f\x2d0050\x2d6f072ac2659f.mount: Deactivated successfully. 
Jun 25 16:17:33.539698 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:17:33.540365 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califbfcb265d93: link becomes ready Jun 25 16:17:33.553012 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali094c514e434: link becomes ready Jun 25 16:17:33.551941 systemd-networkd[1209]: califbfcb265d93: Link UP Jun 25 16:17:33.552034 systemd-networkd[1209]: califbfcb265d93: Gained carrier Jun 25 16:17:33.552328 systemd-networkd[1209]: cali094c514e434: Link UP Jun 25 16:17:33.552417 systemd-networkd[1209]: cali094c514e434: Gained carrier Jun 25 16:17:33.557521 containerd[1445]: 2024-06-25 16:17:33.449 [INFO][4086] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 16:17:33.557521 containerd[1445]: 2024-06-25 16:17:33.462 [INFO][4086] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--9zfw8-eth0 coredns-5dd5756b68- kube-system 9f501b7c-d0db-4a7b-9ae4-5ee408b81d2c 666 0 2024-06-25 16:17:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-9zfw8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califbfcb265d93 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84" Namespace="kube-system" Pod="coredns-5dd5756b68-9zfw8" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--9zfw8-" Jun 25 16:17:33.557521 containerd[1445]: 2024-06-25 16:17:33.462 [INFO][4086] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84" Namespace="kube-system" Pod="coredns-5dd5756b68-9zfw8" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--9zfw8-eth0" Jun 25 16:17:33.557521 containerd[1445]: 2024-06-25 16:17:33.484 
[INFO][4125] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84" HandleID="k8s-pod-network.f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84" Workload="localhost-k8s-coredns--5dd5756b68--9zfw8-eth0" Jun 25 16:17:33.557521 containerd[1445]: 2024-06-25 16:17:33.492 [INFO][4125] ipam_plugin.go 264: Auto assigning IP ContainerID="f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84" HandleID="k8s-pod-network.f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84" Workload="localhost-k8s-coredns--5dd5756b68--9zfw8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001149b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-9zfw8", "timestamp":"2024-06-25 16:17:33.484554029 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:17:33.557521 containerd[1445]: 2024-06-25 16:17:33.493 [INFO][4125] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:17:33.557521 containerd[1445]: 2024-06-25 16:17:33.493 [INFO][4125] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:17:33.557521 containerd[1445]: 2024-06-25 16:17:33.493 [INFO][4125] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:17:33.557521 containerd[1445]: 2024-06-25 16:17:33.496 [INFO][4125] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84" host="localhost" Jun 25 16:17:33.557521 containerd[1445]: 2024-06-25 16:17:33.502 [INFO][4125] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:17:33.557521 containerd[1445]: 2024-06-25 16:17:33.504 [INFO][4125] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:17:33.557521 containerd[1445]: 2024-06-25 16:17:33.505 [INFO][4125] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:17:33.557521 containerd[1445]: 2024-06-25 16:17:33.507 [INFO][4125] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:17:33.557521 containerd[1445]: 2024-06-25 16:17:33.507 [INFO][4125] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84" host="localhost" Jun 25 16:17:33.557521 containerd[1445]: 2024-06-25 16:17:33.508 [INFO][4125] ipam.go 1685: Creating new handle: k8s-pod-network.f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84 Jun 25 16:17:33.557521 containerd[1445]: 2024-06-25 16:17:33.512 [INFO][4125] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84" host="localhost" Jun 25 16:17:33.557521 containerd[1445]: 2024-06-25 16:17:33.518 [INFO][4125] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84" 
host="localhost" Jun 25 16:17:33.557521 containerd[1445]: 2024-06-25 16:17:33.518 [INFO][4125] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84" host="localhost" Jun 25 16:17:33.557521 containerd[1445]: 2024-06-25 16:17:33.518 [INFO][4125] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:17:33.557521 containerd[1445]: 2024-06-25 16:17:33.518 [INFO][4125] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84" HandleID="k8s-pod-network.f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84" Workload="localhost-k8s-coredns--5dd5756b68--9zfw8-eth0" Jun 25 16:17:33.560612 containerd[1445]: 2024-06-25 16:17:33.520 [INFO][4086] k8s.go 386: Populated endpoint ContainerID="f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84" Namespace="kube-system" Pod="coredns-5dd5756b68-9zfw8" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--9zfw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--9zfw8-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"9f501b7c-d0db-4a7b-9ae4-5ee408b81d2c", ResourceVersion:"666", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 17, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-9zfw8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbfcb265d93", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:17:33.560612 containerd[1445]: 2024-06-25 16:17:33.520 [INFO][4086] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84" Namespace="kube-system" Pod="coredns-5dd5756b68-9zfw8" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--9zfw8-eth0" Jun 25 16:17:33.560612 containerd[1445]: 2024-06-25 16:17:33.520 [INFO][4086] dataplane_linux.go 68: Setting the host side veth name to califbfcb265d93 ContainerID="f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84" Namespace="kube-system" Pod="coredns-5dd5756b68-9zfw8" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--9zfw8-eth0" Jun 25 16:17:33.560612 containerd[1445]: 2024-06-25 16:17:33.539 [INFO][4086] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84" Namespace="kube-system" Pod="coredns-5dd5756b68-9zfw8" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--9zfw8-eth0" Jun 25 16:17:33.560612 containerd[1445]: 2024-06-25 16:17:33.542 [INFO][4086] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84" Namespace="kube-system" Pod="coredns-5dd5756b68-9zfw8" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--9zfw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--9zfw8-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"9f501b7c-d0db-4a7b-9ae4-5ee408b81d2c", ResourceVersion:"666", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 17, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84", Pod:"coredns-5dd5756b68-9zfw8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbfcb265d93", MAC:"3e:d3:18:5c:06:99", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:17:33.560612 containerd[1445]: 2024-06-25 16:17:33.552 [INFO][4086] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84" Namespace="kube-system" Pod="coredns-5dd5756b68-9zfw8" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--9zfw8-eth0" Jun 25 16:17:33.561498 containerd[1445]: 2024-06-25 16:17:33.451 [INFO][4091] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 16:17:33.561498 containerd[1445]: 2024-06-25 16:17:33.459 [INFO][4091] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0 calico-kube-controllers-6d8874f6d7- calico-system 087c036b-d5d3-4f8e-b3ce-55f1030399f4 668 0 2024-06-25 16:17:07 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d8874f6d7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6d8874f6d7-8dx8x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali094c514e434 [] []}} ContainerID="aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b" Namespace="calico-system" Pod="calico-kube-controllers-6d8874f6d7-8dx8x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-" Jun 25 16:17:33.561498 containerd[1445]: 2024-06-25 16:17:33.459 [INFO][4091] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b" Namespace="calico-system" Pod="calico-kube-controllers-6d8874f6d7-8dx8x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0" Jun 25 16:17:33.561498 containerd[1445]: 2024-06-25 16:17:33.493 [INFO][4123] ipam_plugin.go 224: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b" HandleID="k8s-pod-network.aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b" Workload="localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0" Jun 25 16:17:33.561498 containerd[1445]: 2024-06-25 16:17:33.500 [INFO][4123] ipam_plugin.go 264: Auto assigning IP ContainerID="aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b" HandleID="k8s-pod-network.aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b" Workload="localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000312030), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6d8874f6d7-8dx8x", "timestamp":"2024-06-25 16:17:33.493810973 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:17:33.561498 containerd[1445]: 2024-06-25 16:17:33.500 [INFO][4123] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:17:33.561498 containerd[1445]: 2024-06-25 16:17:33.518 [INFO][4123] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:17:33.561498 containerd[1445]: 2024-06-25 16:17:33.518 [INFO][4123] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:17:33.561498 containerd[1445]: 2024-06-25 16:17:33.520 [INFO][4123] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b" host="localhost" Jun 25 16:17:33.561498 containerd[1445]: 2024-06-25 16:17:33.523 [INFO][4123] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:17:33.561498 containerd[1445]: 2024-06-25 16:17:33.526 [INFO][4123] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:17:33.561498 containerd[1445]: 2024-06-25 16:17:33.530 [INFO][4123] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:17:33.561498 containerd[1445]: 2024-06-25 16:17:33.532 [INFO][4123] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:17:33.561498 containerd[1445]: 2024-06-25 16:17:33.532 [INFO][4123] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b" host="localhost" Jun 25 16:17:33.561498 containerd[1445]: 2024-06-25 16:17:33.534 [INFO][4123] ipam.go 1685: Creating new handle: k8s-pod-network.aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b Jun 25 16:17:33.561498 containerd[1445]: 2024-06-25 16:17:33.539 [INFO][4123] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b" host="localhost" Jun 25 16:17:33.561498 containerd[1445]: 2024-06-25 16:17:33.546 [INFO][4123] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b" 
host="localhost" Jun 25 16:17:33.561498 containerd[1445]: 2024-06-25 16:17:33.546 [INFO][4123] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b" host="localhost" Jun 25 16:17:33.561498 containerd[1445]: 2024-06-25 16:17:33.546 [INFO][4123] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:17:33.561498 containerd[1445]: 2024-06-25 16:17:33.546 [INFO][4123] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b" HandleID="k8s-pod-network.aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b" Workload="localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0" Jun 25 16:17:33.562796 containerd[1445]: 2024-06-25 16:17:33.549 [INFO][4091] k8s.go 386: Populated endpoint ContainerID="aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b" Namespace="calico-system" Pod="calico-kube-controllers-6d8874f6d7-8dx8x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0", GenerateName:"calico-kube-controllers-6d8874f6d7-", Namespace:"calico-system", SelfLink:"", UID:"087c036b-d5d3-4f8e-b3ce-55f1030399f4", ResourceVersion:"668", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d8874f6d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6d8874f6d7-8dx8x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali094c514e434", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:17:33.562796 containerd[1445]: 2024-06-25 16:17:33.549 [INFO][4091] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b" Namespace="calico-system" Pod="calico-kube-controllers-6d8874f6d7-8dx8x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0" Jun 25 16:17:33.562796 containerd[1445]: 2024-06-25 16:17:33.549 [INFO][4091] dataplane_linux.go 68: Setting the host side veth name to cali094c514e434 ContainerID="aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b" Namespace="calico-system" Pod="calico-kube-controllers-6d8874f6d7-8dx8x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0" Jun 25 16:17:33.562796 containerd[1445]: 2024-06-25 16:17:33.551 [INFO][4091] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b" Namespace="calico-system" Pod="calico-kube-controllers-6d8874f6d7-8dx8x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0" Jun 25 16:17:33.562796 containerd[1445]: 2024-06-25 16:17:33.553 [INFO][4091] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b" Namespace="calico-system" Pod="calico-kube-controllers-6d8874f6d7-8dx8x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0", GenerateName:"calico-kube-controllers-6d8874f6d7-", Namespace:"calico-system", SelfLink:"", UID:"087c036b-d5d3-4f8e-b3ce-55f1030399f4", ResourceVersion:"668", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d8874f6d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b", Pod:"calico-kube-controllers-6d8874f6d7-8dx8x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali094c514e434", MAC:"4a:ed:b3:a6:1b:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:17:33.562796 containerd[1445]: 2024-06-25 16:17:33.559 [INFO][4091] k8s.go 500: Wrote updated endpoint to datastore ContainerID="aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b" 
Namespace="calico-system" Pod="calico-kube-controllers-6d8874f6d7-8dx8x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0" Jun 25 16:17:33.590094 systemd-networkd[1209]: caliadcafb6a9f8: Link UP Jun 25 16:17:33.591417 systemd-networkd[1209]: caliadcafb6a9f8: Gained carrier Jun 25 16:17:33.591665 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliadcafb6a9f8: link becomes ready Jun 25 16:17:33.601319 containerd[1445]: 2024-06-25 16:17:33.476 [INFO][4114] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 16:17:33.601319 containerd[1445]: 2024-06-25 16:17:33.483 [INFO][4114] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--xqfmx-eth0 coredns-5dd5756b68- kube-system 0a650a4d-de28-477d-828b-814dd78885cf 667 0 2024-06-25 16:17:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-xqfmx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliadcafb6a9f8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a" Namespace="kube-system" Pod="coredns-5dd5756b68-xqfmx" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xqfmx-" Jun 25 16:17:33.601319 containerd[1445]: 2024-06-25 16:17:33.483 [INFO][4114] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a" Namespace="kube-system" Pod="coredns-5dd5756b68-xqfmx" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xqfmx-eth0" Jun 25 16:17:33.601319 containerd[1445]: 2024-06-25 16:17:33.509 [INFO][4136] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a" 
HandleID="k8s-pod-network.a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a" Workload="localhost-k8s-coredns--5dd5756b68--xqfmx-eth0" Jun 25 16:17:33.601319 containerd[1445]: 2024-06-25 16:17:33.517 [INFO][4136] ipam_plugin.go 264: Auto assigning IP ContainerID="a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a" HandleID="k8s-pod-network.a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a" Workload="localhost-k8s-coredns--5dd5756b68--xqfmx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dfe00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-xqfmx", "timestamp":"2024-06-25 16:17:33.509770757 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:17:33.601319 containerd[1445]: 2024-06-25 16:17:33.517 [INFO][4136] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:17:33.601319 containerd[1445]: 2024-06-25 16:17:33.546 [INFO][4136] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:17:33.601319 containerd[1445]: 2024-06-25 16:17:33.547 [INFO][4136] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:17:33.601319 containerd[1445]: 2024-06-25 16:17:33.557 [INFO][4136] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a" host="localhost" Jun 25 16:17:33.601319 containerd[1445]: 2024-06-25 16:17:33.566 [INFO][4136] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:17:33.601319 containerd[1445]: 2024-06-25 16:17:33.572 [INFO][4136] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:17:33.601319 containerd[1445]: 2024-06-25 16:17:33.573 [INFO][4136] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:17:33.601319 containerd[1445]: 2024-06-25 16:17:33.575 [INFO][4136] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:17:33.601319 containerd[1445]: 2024-06-25 16:17:33.575 [INFO][4136] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a" host="localhost" Jun 25 16:17:33.601319 containerd[1445]: 2024-06-25 16:17:33.576 [INFO][4136] ipam.go 1685: Creating new handle: k8s-pod-network.a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a Jun 25 16:17:33.601319 containerd[1445]: 2024-06-25 16:17:33.579 [INFO][4136] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a" host="localhost" Jun 25 16:17:33.601319 containerd[1445]: 2024-06-25 16:17:33.585 [INFO][4136] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a" 
host="localhost" Jun 25 16:17:33.601319 containerd[1445]: 2024-06-25 16:17:33.585 [INFO][4136] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a" host="localhost" Jun 25 16:17:33.601319 containerd[1445]: 2024-06-25 16:17:33.585 [INFO][4136] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:17:33.601319 containerd[1445]: 2024-06-25 16:17:33.585 [INFO][4136] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a" HandleID="k8s-pod-network.a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a" Workload="localhost-k8s-coredns--5dd5756b68--xqfmx-eth0" Jun 25 16:17:33.601857 containerd[1445]: 2024-06-25 16:17:33.588 [INFO][4114] k8s.go 386: Populated endpoint ContainerID="a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a" Namespace="kube-system" Pod="coredns-5dd5756b68-xqfmx" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xqfmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--xqfmx-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0a650a4d-de28-477d-828b-814dd78885cf", ResourceVersion:"667", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 17, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-xqfmx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliadcafb6a9f8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:17:33.601857 containerd[1445]: 2024-06-25 16:17:33.588 [INFO][4114] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a" Namespace="kube-system" Pod="coredns-5dd5756b68-xqfmx" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xqfmx-eth0" Jun 25 16:17:33.601857 containerd[1445]: 2024-06-25 16:17:33.588 [INFO][4114] dataplane_linux.go 68: Setting the host side veth name to caliadcafb6a9f8 ContainerID="a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a" Namespace="kube-system" Pod="coredns-5dd5756b68-xqfmx" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xqfmx-eth0" Jun 25 16:17:33.601857 containerd[1445]: 2024-06-25 16:17:33.592 [INFO][4114] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a" Namespace="kube-system" Pod="coredns-5dd5756b68-xqfmx" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xqfmx-eth0" Jun 25 16:17:33.601857 containerd[1445]: 2024-06-25 16:17:33.592 [INFO][4114] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a" Namespace="kube-system" Pod="coredns-5dd5756b68-xqfmx" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xqfmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--xqfmx-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0a650a4d-de28-477d-828b-814dd78885cf", ResourceVersion:"667", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 17, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a", Pod:"coredns-5dd5756b68-xqfmx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliadcafb6a9f8", MAC:"06:12:c5:61:e8:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:17:33.601857 containerd[1445]: 2024-06-25 16:17:33.600 [INFO][4114] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a" Namespace="kube-system" Pod="coredns-5dd5756b68-xqfmx" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xqfmx-eth0" Jun 25 16:17:33.607385 containerd[1445]: time="2024-06-25T16:17:33.606902974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:17:33.607385 containerd[1445]: time="2024-06-25T16:17:33.606936562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:17:33.607385 containerd[1445]: time="2024-06-25T16:17:33.606949283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:17:33.607385 containerd[1445]: time="2024-06-25T16:17:33.606963818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:17:33.612329 containerd[1445]: time="2024-06-25T16:17:33.612179289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:17:33.612329 containerd[1445]: time="2024-06-25T16:17:33.612219123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:17:33.612329 containerd[1445]: time="2024-06-25T16:17:33.612232015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:17:33.612329 containerd[1445]: time="2024-06-25T16:17:33.612240847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:17:33.626075 systemd-resolved[1362]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:17:33.628455 systemd-resolved[1362]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:17:33.666014 containerd[1445]: time="2024-06-25T16:17:33.664885757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d8874f6d7-8dx8x,Uid:087c036b-d5d3-4f8e-b3ce-55f1030399f4,Namespace:calico-system,Attempt:1,} returns sandbox id \"aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b\"" Jun 25 16:17:33.671461 containerd[1445]: time="2024-06-25T16:17:33.671437127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-9zfw8,Uid:9f501b7c-d0db-4a7b-9ae4-5ee408b81d2c,Namespace:kube-system,Attempt:1,} returns sandbox id \"f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84\"" Jun 25 16:17:33.672563 containerd[1445]: time="2024-06-25T16:17:33.672283981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 16:17:33.678881 containerd[1445]: time="2024-06-25T16:17:33.678794141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:17:33.678881 containerd[1445]: time="2024-06-25T16:17:33.678840873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:17:33.678881 containerd[1445]: time="2024-06-25T16:17:33.678852288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:17:33.678881 containerd[1445]: time="2024-06-25T16:17:33.678858501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:17:33.682588 containerd[1445]: time="2024-06-25T16:17:33.682116510Z" level=info msg="CreateContainer within sandbox \"f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:17:33.703284 systemd-resolved[1362]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:17:33.716472 containerd[1445]: time="2024-06-25T16:17:33.716447989Z" level=info msg="CreateContainer within sandbox \"f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5ca84a9b76f611ad55a2e68b2c3dcc54ed0582d4db6fa02c3102425a7676843b\"" Jun 25 16:17:33.717560 containerd[1445]: time="2024-06-25T16:17:33.716718188Z" level=info msg="StartContainer for \"5ca84a9b76f611ad55a2e68b2c3dcc54ed0582d4db6fa02c3102425a7676843b\"" Jun 25 16:17:33.724666 containerd[1445]: time="2024-06-25T16:17:33.724642022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xqfmx,Uid:0a650a4d-de28-477d-828b-814dd78885cf,Namespace:kube-system,Attempt:1,} returns sandbox id \"a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a\"" Jun 25 16:17:33.727762 containerd[1445]: time="2024-06-25T16:17:33.727736752Z" level=info msg="CreateContainer within sandbox \"a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:17:33.735997 containerd[1445]: time="2024-06-25T16:17:33.735973110Z" level=info msg="CreateContainer within sandbox \"a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ef5a496270a291e14da82b9761b38f9c54fd789bbabd3b274e3a26e5e94f6475\"" Jun 25 16:17:33.737508 containerd[1445]: time="2024-06-25T16:17:33.737481748Z" level=info msg="StartContainer 
for \"ef5a496270a291e14da82b9761b38f9c54fd789bbabd3b274e3a26e5e94f6475\"" Jun 25 16:17:33.754917 containerd[1445]: time="2024-06-25T16:17:33.754885232Z" level=info msg="StartContainer for \"5ca84a9b76f611ad55a2e68b2c3dcc54ed0582d4db6fa02c3102425a7676843b\" returns successfully" Jun 25 16:17:33.780660 containerd[1445]: time="2024-06-25T16:17:33.780591244Z" level=info msg="StartContainer for \"ef5a496270a291e14da82b9761b38f9c54fd789bbabd3b274e3a26e5e94f6475\" returns successfully" Jun 25 16:17:34.013428 containerd[1445]: time="2024-06-25T16:17:34.013393012Z" level=info msg="StopPodSandbox for \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\"" Jun 25 16:17:34.105375 containerd[1445]: 2024-06-25 16:17:34.068 [INFO][4388] k8s.go 608: Cleaning up netns ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Jun 25 16:17:34.105375 containerd[1445]: 2024-06-25 16:17:34.068 [INFO][4388] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" iface="eth0" netns="/var/run/netns/cni-4256a6f5-1190-b858-98ef-a1cf47e9fe24" Jun 25 16:17:34.105375 containerd[1445]: 2024-06-25 16:17:34.068 [INFO][4388] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" iface="eth0" netns="/var/run/netns/cni-4256a6f5-1190-b858-98ef-a1cf47e9fe24" Jun 25 16:17:34.105375 containerd[1445]: 2024-06-25 16:17:34.068 [INFO][4388] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" iface="eth0" netns="/var/run/netns/cni-4256a6f5-1190-b858-98ef-a1cf47e9fe24" Jun 25 16:17:34.105375 containerd[1445]: 2024-06-25 16:17:34.068 [INFO][4388] k8s.go 615: Releasing IP address(es) ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Jun 25 16:17:34.105375 containerd[1445]: 2024-06-25 16:17:34.068 [INFO][4388] utils.go 188: Calico CNI releasing IP address ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Jun 25 16:17:34.105375 containerd[1445]: 2024-06-25 16:17:34.090 [INFO][4400] ipam_plugin.go 411: Releasing address using handleID ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" HandleID="k8s-pod-network.447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Workload="localhost-k8s-csi--node--driver--sdrw2-eth0" Jun 25 16:17:34.105375 containerd[1445]: 2024-06-25 16:17:34.090 [INFO][4400] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:17:34.105375 containerd[1445]: 2024-06-25 16:17:34.090 [INFO][4400] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:17:34.105375 containerd[1445]: 2024-06-25 16:17:34.096 [WARNING][4400] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" HandleID="k8s-pod-network.447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Workload="localhost-k8s-csi--node--driver--sdrw2-eth0" Jun 25 16:17:34.105375 containerd[1445]: 2024-06-25 16:17:34.097 [INFO][4400] ipam_plugin.go 439: Releasing address using workloadID ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" HandleID="k8s-pod-network.447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Workload="localhost-k8s-csi--node--driver--sdrw2-eth0" Jun 25 16:17:34.105375 containerd[1445]: 2024-06-25 16:17:34.102 [INFO][4400] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:17:34.105375 containerd[1445]: 2024-06-25 16:17:34.104 [INFO][4388] k8s.go 621: Teardown processing complete. ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Jun 25 16:17:34.105810 containerd[1445]: time="2024-06-25T16:17:34.105518890Z" level=info msg="TearDown network for sandbox \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\" successfully" Jun 25 16:17:34.105810 containerd[1445]: time="2024-06-25T16:17:34.105550831Z" level=info msg="StopPodSandbox for \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\" returns successfully" Jun 25 16:17:34.106079 containerd[1445]: time="2024-06-25T16:17:34.106063081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sdrw2,Uid:18093c76-756e-42a9-853b-9dc1cb0c45f6,Namespace:calico-system,Attempt:1,}" Jun 25 16:17:34.209211 kubelet[2601]: I0625 16:17:34.209172 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-9zfw8" podStartSLOduration=32.209135464 podCreationTimestamp="2024-06-25 16:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:17:34.208193825 +0000 UTC 
m=+46.324910682" watchObservedRunningTime="2024-06-25 16:17:34.209135464 +0000 UTC m=+46.325852321" Jun 25 16:17:34.216196 kubelet[2601]: I0625 16:17:34.216165 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-xqfmx" podStartSLOduration=32.216140719 podCreationTimestamp="2024-06-25 16:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:17:34.215822043 +0000 UTC m=+46.332538901" watchObservedRunningTime="2024-06-25 16:17:34.216140719 +0000 UTC m=+46.332857565" Jun 25 16:17:34.247446 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1a6be56525a: link becomes ready Jun 25 16:17:34.244349 systemd-networkd[1209]: cali1a6be56525a: Link UP Jun 25 16:17:34.244732 systemd-networkd[1209]: cali1a6be56525a: Gained carrier Jun 25 16:17:34.267746 containerd[1445]: 2024-06-25 16:17:34.148 [INFO][4424] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 16:17:34.267746 containerd[1445]: 2024-06-25 16:17:34.160 [INFO][4424] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--sdrw2-eth0 csi-node-driver- calico-system 18093c76-756e-42a9-853b-9dc1cb0c45f6 687 0 2024-06-25 16:17:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-sdrw2 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali1a6be56525a [] []}} ContainerID="1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057" Namespace="calico-system" Pod="csi-node-driver-sdrw2" WorkloadEndpoint="localhost-k8s-csi--node--driver--sdrw2-" Jun 25 16:17:34.267746 containerd[1445]: 2024-06-25 16:17:34.160 
[INFO][4424] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057" Namespace="calico-system" Pod="csi-node-driver-sdrw2" WorkloadEndpoint="localhost-k8s-csi--node--driver--sdrw2-eth0" Jun 25 16:17:34.267746 containerd[1445]: 2024-06-25 16:17:34.194 [INFO][4436] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057" HandleID="k8s-pod-network.1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057" Workload="localhost-k8s-csi--node--driver--sdrw2-eth0" Jun 25 16:17:34.267746 containerd[1445]: 2024-06-25 16:17:34.204 [INFO][4436] ipam_plugin.go 264: Auto assigning IP ContainerID="1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057" HandleID="k8s-pod-network.1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057" Workload="localhost-k8s-csi--node--driver--sdrw2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee050), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-sdrw2", "timestamp":"2024-06-25 16:17:34.194836495 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:17:34.267746 containerd[1445]: 2024-06-25 16:17:34.204 [INFO][4436] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:17:34.267746 containerd[1445]: 2024-06-25 16:17:34.204 [INFO][4436] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:17:34.267746 containerd[1445]: 2024-06-25 16:17:34.204 [INFO][4436] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:17:34.267746 containerd[1445]: 2024-06-25 16:17:34.206 [INFO][4436] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057" host="localhost" Jun 25 16:17:34.267746 containerd[1445]: 2024-06-25 16:17:34.216 [INFO][4436] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:17:34.267746 containerd[1445]: 2024-06-25 16:17:34.231 [INFO][4436] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:17:34.267746 containerd[1445]: 2024-06-25 16:17:34.232 [INFO][4436] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:17:34.267746 containerd[1445]: 2024-06-25 16:17:34.234 [INFO][4436] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:17:34.267746 containerd[1445]: 2024-06-25 16:17:34.234 [INFO][4436] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057" host="localhost" Jun 25 16:17:34.267746 containerd[1445]: 2024-06-25 16:17:34.235 [INFO][4436] ipam.go 1685: Creating new handle: k8s-pod-network.1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057 Jun 25 16:17:34.267746 containerd[1445]: 2024-06-25 16:17:34.237 [INFO][4436] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057" host="localhost" Jun 25 16:17:34.267746 containerd[1445]: 2024-06-25 16:17:34.240 [INFO][4436] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057" 
host="localhost" Jun 25 16:17:34.267746 containerd[1445]: 2024-06-25 16:17:34.240 [INFO][4436] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057" host="localhost" Jun 25 16:17:34.267746 containerd[1445]: 2024-06-25 16:17:34.240 [INFO][4436] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:17:34.267746 containerd[1445]: 2024-06-25 16:17:34.240 [INFO][4436] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057" HandleID="k8s-pod-network.1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057" Workload="localhost-k8s-csi--node--driver--sdrw2-eth0" Jun 25 16:17:34.268301 containerd[1445]: 2024-06-25 16:17:34.241 [INFO][4424] k8s.go 386: Populated endpoint ContainerID="1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057" Namespace="calico-system" Pod="csi-node-driver-sdrw2" WorkloadEndpoint="localhost-k8s-csi--node--driver--sdrw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sdrw2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"18093c76-756e-42a9-853b-9dc1cb0c45f6", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-sdrw2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1a6be56525a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:17:34.268301 containerd[1445]: 2024-06-25 16:17:34.242 [INFO][4424] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057" Namespace="calico-system" Pod="csi-node-driver-sdrw2" WorkloadEndpoint="localhost-k8s-csi--node--driver--sdrw2-eth0" Jun 25 16:17:34.268301 containerd[1445]: 2024-06-25 16:17:34.242 [INFO][4424] dataplane_linux.go 68: Setting the host side veth name to cali1a6be56525a ContainerID="1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057" Namespace="calico-system" Pod="csi-node-driver-sdrw2" WorkloadEndpoint="localhost-k8s-csi--node--driver--sdrw2-eth0" Jun 25 16:17:34.268301 containerd[1445]: 2024-06-25 16:17:34.248 [INFO][4424] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057" Namespace="calico-system" Pod="csi-node-driver-sdrw2" WorkloadEndpoint="localhost-k8s-csi--node--driver--sdrw2-eth0" Jun 25 16:17:34.268301 containerd[1445]: 2024-06-25 16:17:34.250 [INFO][4424] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057" Namespace="calico-system" Pod="csi-node-driver-sdrw2" WorkloadEndpoint="localhost-k8s-csi--node--driver--sdrw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sdrw2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"18093c76-756e-42a9-853b-9dc1cb0c45f6", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057", Pod:"csi-node-driver-sdrw2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1a6be56525a", MAC:"ce:ab:ed:d5:4c:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:17:34.268301 containerd[1445]: 2024-06-25 16:17:34.259 [INFO][4424] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057" Namespace="calico-system" Pod="csi-node-driver-sdrw2" WorkloadEndpoint="localhost-k8s-csi--node--driver--sdrw2-eth0" Jun 25 16:17:34.289056 containerd[1445]: time="2024-06-25T16:17:34.289015510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:17:34.289183 containerd[1445]: time="2024-06-25T16:17:34.289045256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:17:34.289183 containerd[1445]: time="2024-06-25T16:17:34.289061035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:17:34.289183 containerd[1445]: time="2024-06-25T16:17:34.289068156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:17:34.314000 audit[4486]: NETFILTER_CFG table=filter:95 family=2 entries=16 op=nft_register_rule pid=4486 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:34.318811 kernel: kauditd_printk_skb: 25 callbacks suppressed Jun 25 16:17:34.319390 kernel: audit: type=1325 audit(1719332254.314:293): table=filter:95 family=2 entries=16 op=nft_register_rule pid=4486 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:34.319433 kernel: audit: type=1300 audit(1719332254.314:293): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffcd5e3f360 a2=0 a3=7ffcd5e3f34c items=0 ppid=2737 pid=4486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:34.314000 audit[4486]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffcd5e3f360 a2=0 a3=7ffcd5e3f34c items=0 ppid=2737 pid=4486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:34.314000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:34.322645 kernel: audit: type=1327 audit(1719332254.314:293): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:34.314000 audit[4486]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=4486 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:34.326674 kernel: audit: type=1325 audit(1719332254.314:294): table=nat:96 family=2 entries=12 op=nft_register_rule pid=4486 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:34.314000 audit[4486]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcd5e3f360 a2=0 a3=0 items=0 ppid=2737 pid=4486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:34.314000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:34.330356 systemd-resolved[1362]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:17:34.331339 kernel: audit: type=1300 audit(1719332254.314:294): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcd5e3f360 a2=0 a3=0 items=0 ppid=2737 pid=4486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:34.331391 kernel: audit: type=1327 audit(1719332254.314:294): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:34.340280 containerd[1445]: time="2024-06-25T16:17:34.340254561Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sdrw2,Uid:18093c76-756e-42a9-853b-9dc1cb0c45f6,Namespace:calico-system,Attempt:1,} returns sandbox id \"1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057\"" Jun 25 16:17:34.385719 systemd[1]: run-netns-cni\x2d4256a6f5\x2d1190\x2db858\x2d98ef\x2da1cf47e9fe24.mount: Deactivated successfully. Jun 25 16:17:34.767780 systemd-networkd[1209]: cali094c514e434: Gained IPv6LL Jun 25 16:17:35.023821 systemd-networkd[1209]: caliadcafb6a9f8: Gained IPv6LL Jun 25 16:17:35.334000 audit[4519]: NETFILTER_CFG table=filter:97 family=2 entries=13 op=nft_register_rule pid=4519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:35.334000 audit[4519]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe926d2ad0 a2=0 a3=7ffe926d2abc items=0 ppid=2737 pid=4519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:35.339979 kernel: audit: type=1325 audit(1719332255.334:295): table=filter:97 family=2 entries=13 op=nft_register_rule pid=4519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:35.340025 kernel: audit: type=1300 audit(1719332255.334:295): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe926d2ad0 a2=0 a3=7ffe926d2abc items=0 ppid=2737 pid=4519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:35.334000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:35.341265 kernel: audit: type=1327 audit(1719332255.334:295): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
Jun 25 16:17:35.341317 kernel: audit: type=1325 audit(1719332255.334:296): table=nat:98 family=2 entries=33 op=nft_register_chain pid=4519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:35.334000 audit[4519]: NETFILTER_CFG table=nat:98 family=2 entries=33 op=nft_register_chain pid=4519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:35.334000 audit[4519]: SYSCALL arch=c000003e syscall=46 success=yes exit=13428 a0=3 a1=7ffe926d2ad0 a2=0 a3=7ffe926d2abc items=0 ppid=2737 pid=4519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:35.334000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:35.408831 systemd-networkd[1209]: califbfcb265d93: Gained IPv6LL Jun 25 16:17:35.490000 audit[4526]: NETFILTER_CFG table=filter:99 family=2 entries=10 op=nft_register_rule pid=4526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:35.490000 audit[4526]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffecdd303b0 a2=0 a3=7ffecdd3039c items=0 ppid=2737 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:35.490000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:35.579000 audit[4526]: NETFILTER_CFG table=nat:100 family=2 entries=54 op=nft_register_chain pid=4526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:35.579000 audit[4526]: SYSCALL arch=c000003e syscall=46 success=yes exit=19092 a0=3 a1=7ffecdd303b0 a2=0 a3=7ffecdd3039c items=0 ppid=2737 pid=4526 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:35.579000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:35.599724 systemd-networkd[1209]: cali1a6be56525a: Gained IPv6LL Jun 25 16:17:35.891729 containerd[1445]: time="2024-06-25T16:17:35.891496914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:35.897923 containerd[1445]: time="2024-06-25T16:17:35.897893940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 16:17:35.913688 containerd[1445]: time="2024-06-25T16:17:35.913672516Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:35.924882 containerd[1445]: time="2024-06-25T16:17:35.924868349Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:35.932963 containerd[1445]: time="2024-06-25T16:17:35.932949354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:35.933923 containerd[1445]: time="2024-06-25T16:17:35.933895867Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 2.261591534s" Jun 25 16:17:35.933997 containerd[1445]: time="2024-06-25T16:17:35.933977343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 16:17:35.934897 containerd[1445]: time="2024-06-25T16:17:35.934884396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 16:17:35.957317 containerd[1445]: time="2024-06-25T16:17:35.957293240Z" level=info msg="CreateContainer within sandbox \"aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 16:17:35.966812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1527325776.mount: Deactivated successfully. Jun 25 16:17:35.969128 containerd[1445]: time="2024-06-25T16:17:35.969084299Z" level=info msg="CreateContainer within sandbox \"aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2f13a7a707c80ec9840afc83b4fc716af3455474b4407b19b54d80625db6afc0\"" Jun 25 16:17:35.970155 containerd[1445]: time="2024-06-25T16:17:35.970137069Z" level=info msg="StartContainer for \"2f13a7a707c80ec9840afc83b4fc716af3455474b4407b19b54d80625db6afc0\"" Jun 25 16:17:36.031907 containerd[1445]: time="2024-06-25T16:17:36.031881700Z" level=info msg="StartContainer for \"2f13a7a707c80ec9840afc83b4fc716af3455474b4407b19b54d80625db6afc0\" returns successfully" Jun 25 16:17:36.192367 kubelet[2601]: I0625 16:17:36.192336 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d8874f6d7-8dx8x" podStartSLOduration=26.929907499 podCreationTimestamp="2024-06-25 16:17:07 +0000 UTC" 
firstStartedPulling="2024-06-25 16:17:33.671850131 +0000 UTC m=+45.788566978" lastFinishedPulling="2024-06-25 16:17:35.934252306 +0000 UTC m=+48.050969162" observedRunningTime="2024-06-25 16:17:36.159538245 +0000 UTC m=+48.276255101" watchObservedRunningTime="2024-06-25 16:17:36.192309683 +0000 UTC m=+48.309026532" Jun 25 16:17:36.701892 kubelet[2601]: I0625 16:17:36.701867 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:17:36.887000 audit[4606]: NETFILTER_CFG table=filter:101 family=2 entries=9 op=nft_register_rule pid=4606 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:36.887000 audit[4606]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe6840f150 a2=0 a3=7ffe6840f13c items=0 ppid=2737 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:36.887000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:36.887000 audit[4606]: NETFILTER_CFG table=nat:102 family=2 entries=25 op=nft_register_chain pid=4606 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:36.887000 audit[4606]: SYSCALL arch=c000003e syscall=46 success=yes exit=8580 a0=3 a1=7ffe6840f150 a2=0 a3=7ffe6840f13c items=0 ppid=2737 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:36.887000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:37.647569 systemd-networkd[1209]: vxlan.calico: Link UP Jun 25 16:17:37.647575 systemd-networkd[1209]: vxlan.calico: Gained carrier Jun 25 
16:17:37.669000 audit: BPF prog-id=10 op=LOAD Jun 25 16:17:37.669000 audit[4689]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc14d86e60 a2=70 a3=7f912d1e4000 items=0 ppid=4607 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:37.669000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:17:37.669000 audit: BPF prog-id=10 op=UNLOAD Jun 25 16:17:37.669000 audit: BPF prog-id=11 op=LOAD Jun 25 16:17:37.669000 audit[4689]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc14d86e60 a2=70 a3=6f items=0 ppid=4607 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:37.669000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:17:37.669000 audit: BPF prog-id=11 op=UNLOAD Jun 25 16:17:37.669000 audit: BPF prog-id=12 op=LOAD Jun 25 16:17:37.669000 audit[4689]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc14d86df0 a2=70 a3=7ffc14d86e60 items=0 ppid=4607 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:37.669000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:17:37.669000 audit: BPF prog-id=12 op=UNLOAD Jun 25 16:17:37.670000 audit: BPF prog-id=13 op=LOAD Jun 25 16:17:37.670000 audit[4689]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc14d86e20 a2=70 a3=0 items=0 ppid=4607 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:37.670000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:17:37.677000 audit: BPF prog-id=13 op=UNLOAD Jun 25 16:17:37.809000 audit[4732]: NETFILTER_CFG table=mangle:103 family=2 entries=16 op=nft_register_chain pid=4732 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:17:37.809000 audit[4732]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff31b09f10 a2=0 a3=7fff31b09efc items=0 ppid=4607 pid=4732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:37.809000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:17:37.826000 audit[4731]: NETFILTER_CFG table=raw:104 family=2 entries=19 op=nft_register_chain pid=4731 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:17:37.826000 audit[4731]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7fffa40c10a0 a2=0 
a3=7fffa40c108c items=0 ppid=4607 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:37.826000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:17:37.842000 audit[4737]: NETFILTER_CFG table=nat:105 family=2 entries=15 op=nft_register_chain pid=4737 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:17:37.842000 audit[4737]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe1f6ea340 a2=0 a3=7ffe1f6ea32c items=0 ppid=4607 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:37.842000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:17:37.843000 audit[4734]: NETFILTER_CFG table=filter:106 family=2 entries=147 op=nft_register_chain pid=4734 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:17:37.843000 audit[4734]: SYSCALL arch=c000003e syscall=46 success=yes exit=83712 a0=3 a1=7ffc35bde600 a2=0 a3=7ffc35bde5ec items=0 ppid=4607 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:37.843000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:17:37.964093 containerd[1445]: time="2024-06-25T16:17:37.964063676Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:37.964862 containerd[1445]: time="2024-06-25T16:17:37.964829130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 16:17:37.964984 containerd[1445]: time="2024-06-25T16:17:37.964970602Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:37.965993 containerd[1445]: time="2024-06-25T16:17:37.965976709Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:37.966883 containerd[1445]: time="2024-06-25T16:17:37.966865656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:37.967378 containerd[1445]: time="2024-06-25T16:17:37.967356073Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.032396049s" Jun 25 16:17:37.967420 containerd[1445]: time="2024-06-25T16:17:37.967380884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 16:17:37.969187 containerd[1445]: time="2024-06-25T16:17:37.969164090Z" level=info msg="CreateContainer within sandbox \"1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 16:17:37.978852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1538556970.mount: Deactivated successfully. Jun 25 16:17:37.983276 containerd[1445]: time="2024-06-25T16:17:37.983245993Z" level=info msg="CreateContainer within sandbox \"1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"73fa5e107b75413302e143f2fadc78ba657692bc41064640e891a22177165a0d\"" Jun 25 16:17:37.983815 containerd[1445]: time="2024-06-25T16:17:37.983797053Z" level=info msg="StartContainer for \"73fa5e107b75413302e143f2fadc78ba657692bc41064640e891a22177165a0d\"" Jun 25 16:17:38.052642 containerd[1445]: time="2024-06-25T16:17:38.052592926Z" level=info msg="StartContainer for \"73fa5e107b75413302e143f2fadc78ba657692bc41064640e891a22177165a0d\" returns successfully" Jun 25 16:17:38.081211 containerd[1445]: time="2024-06-25T16:17:38.081005417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 16:17:38.975287 systemd[1]: run-containerd-runc-k8s.io-73fa5e107b75413302e143f2fadc78ba657692bc41064640e891a22177165a0d-runc.pWSqLm.mount: Deactivated successfully. 
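The audit `PROCTITLE` records above carry the audited command line hex-encoded, with NUL bytes separating the argv elements. A minimal sketch for decoding one of those values (the hex string is copied verbatim from the records above; the helper name is illustrative):

```python
# Decode an audit PROCTITLE hex string into its argv list.
# PROCTITLE encodes the process title as hex bytes with NUL separators.
def decode_proctitle(hex_value: str) -> list[str]:
    raw = bytes.fromhex(hex_value)
    return raw.decode("utf-8", errors="replace").split("\x00")

# Value taken verbatim from the PROCTITLE records above.
proctitle = (
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
    "002D2D766572626F7365002D2D77616974003130002D2D776169742D696E"
    "74657276616C003530303030"
)
print(decode_proctitle(proctitle))
# → ['iptables-nft-restore', '--noflush', '--verbose', '--wait', '10', '--wait-interval', '50000']
```

This confirms the `comm="iptables-nft-re"` entries are `iptables-nft-restore` invocations with a 10-second wait and a 50000-microsecond wait interval.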
Jun 25 16:17:39.183747 systemd-networkd[1209]: vxlan.calico: Gained IPv6LL Jun 25 16:17:40.815338 containerd[1445]: time="2024-06-25T16:17:40.815300247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:40.817740 containerd[1445]: time="2024-06-25T16:17:40.817713676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 16:17:40.818201 containerd[1445]: time="2024-06-25T16:17:40.818185617Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:40.818971 containerd[1445]: time="2024-06-25T16:17:40.818956418Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:40.820184 containerd[1445]: time="2024-06-25T16:17:40.820169627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:40.820666 containerd[1445]: time="2024-06-25T16:17:40.820647193Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.739612057s" Jun 25 16:17:40.820704 containerd[1445]: time="2024-06-25T16:17:40.820670450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference 
\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 16:17:40.824303 containerd[1445]: time="2024-06-25T16:17:40.824275797Z" level=info msg="CreateContainer within sandbox \"1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 16:17:40.835343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3821736762.mount: Deactivated successfully. Jun 25 16:17:40.840883 containerd[1445]: time="2024-06-25T16:17:40.840116435Z" level=info msg="CreateContainer within sandbox \"1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e641328e68c0f1df5f9ceeecb2b21812c918ab69805552f1f67342571389c87b\"" Jun 25 16:17:40.842993 containerd[1445]: time="2024-06-25T16:17:40.842967467Z" level=info msg="StartContainer for \"e641328e68c0f1df5f9ceeecb2b21812c918ab69805552f1f67342571389c87b\"" Jun 25 16:17:40.906709 containerd[1445]: time="2024-06-25T16:17:40.906680313Z" level=info msg="StartContainer for \"e641328e68c0f1df5f9ceeecb2b21812c918ab69805552f1f67342571389c87b\" returns successfully" Jun 25 16:17:41.226048 kubelet[2601]: I0625 16:17:41.226031 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-sdrw2" podStartSLOduration=27.746432648 podCreationTimestamp="2024-06-25 16:17:07 +0000 UTC" firstStartedPulling="2024-06-25 16:17:34.341257656 +0000 UTC m=+46.457974509" lastFinishedPulling="2024-06-25 16:17:40.820830388 +0000 UTC m=+52.937547235" observedRunningTime="2024-06-25 16:17:41.225737827 +0000 UTC m=+53.342454684" watchObservedRunningTime="2024-06-25 16:17:41.226005374 +0000 UTC m=+53.342722226" Jun 25 16:17:41.832431 systemd[1]: run-containerd-runc-k8s.io-e641328e68c0f1df5f9ceeecb2b21812c918ab69805552f1f67342571389c87b-runc.Is9J3U.mount: Deactivated successfully. 
Jun 25 16:17:41.838545 kubelet[2601]: I0625 16:17:41.838524 2601 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 16:17:41.838721 kubelet[2601]: I0625 16:17:41.838712 2601 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 16:17:48.022735 containerd[1445]: time="2024-06-25T16:17:48.022708087Z" level=info msg="StopPodSandbox for \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\"" Jun 25 16:17:48.107863 systemd[1]: run-containerd-runc-k8s.io-2f13a7a707c80ec9840afc83b4fc716af3455474b4407b19b54d80625db6afc0-runc.auHLVw.mount: Deactivated successfully. Jun 25 16:17:48.186058 containerd[1445]: 2024-06-25 16:17:48.164 [WARNING][4861] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sdrw2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"18093c76-756e-42a9-853b-9dc1cb0c45f6", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057", Pod:"csi-node-driver-sdrw2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1a6be56525a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:17:48.186058 containerd[1445]: 2024-06-25 16:17:48.164 [INFO][4861] k8s.go 608: Cleaning up netns ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Jun 25 16:17:48.186058 containerd[1445]: 2024-06-25 16:17:48.164 [INFO][4861] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" iface="eth0" netns="" Jun 25 16:17:48.186058 containerd[1445]: 2024-06-25 16:17:48.164 [INFO][4861] k8s.go 615: Releasing IP address(es) ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Jun 25 16:17:48.186058 containerd[1445]: 2024-06-25 16:17:48.164 [INFO][4861] utils.go 188: Calico CNI releasing IP address ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Jun 25 16:17:48.186058 containerd[1445]: 2024-06-25 16:17:48.179 [INFO][4888] ipam_plugin.go 411: Releasing address using handleID ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" HandleID="k8s-pod-network.447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Workload="localhost-k8s-csi--node--driver--sdrw2-eth0" Jun 25 16:17:48.186058 containerd[1445]: 2024-06-25 16:17:48.179 [INFO][4888] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jun 25 16:17:48.186058 containerd[1445]: 2024-06-25 16:17:48.179 [INFO][4888] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:17:48.186058 containerd[1445]: 2024-06-25 16:17:48.182 [WARNING][4888] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" HandleID="k8s-pod-network.447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Workload="localhost-k8s-csi--node--driver--sdrw2-eth0" Jun 25 16:17:48.186058 containerd[1445]: 2024-06-25 16:17:48.182 [INFO][4888] ipam_plugin.go 439: Releasing address using workloadID ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" HandleID="k8s-pod-network.447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Workload="localhost-k8s-csi--node--driver--sdrw2-eth0" Jun 25 16:17:48.186058 containerd[1445]: 2024-06-25 16:17:48.183 [INFO][4888] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:17:48.186058 containerd[1445]: 2024-06-25 16:17:48.184 [INFO][4861] k8s.go 621: Teardown processing complete. 
ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Jun 25 16:17:48.186910 containerd[1445]: time="2024-06-25T16:17:48.186367049Z" level=info msg="TearDown network for sandbox \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\" successfully" Jun 25 16:17:48.186910 containerd[1445]: time="2024-06-25T16:17:48.186386711Z" level=info msg="StopPodSandbox for \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\" returns successfully" Jun 25 16:17:48.187019 containerd[1445]: time="2024-06-25T16:17:48.186906452Z" level=info msg="RemovePodSandbox for \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\"" Jun 25 16:17:48.200054 containerd[1445]: time="2024-06-25T16:17:48.188308817Z" level=info msg="Forcibly stopping sandbox \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\"" Jun 25 16:17:48.250026 containerd[1445]: 2024-06-25 16:17:48.220 [WARNING][4906] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sdrw2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"18093c76-756e-42a9-853b-9dc1cb0c45f6", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1dd8d599eaee335c6c27cfe1c56e5cd6daaf45071a46e0fef4c780dcbdfb3057", Pod:"csi-node-driver-sdrw2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1a6be56525a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:17:48.250026 containerd[1445]: 2024-06-25 16:17:48.220 [INFO][4906] k8s.go 608: Cleaning up netns ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Jun 25 16:17:48.250026 containerd[1445]: 2024-06-25 16:17:48.220 [INFO][4906] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" iface="eth0" netns="" Jun 25 16:17:48.250026 containerd[1445]: 2024-06-25 16:17:48.220 [INFO][4906] k8s.go 615: Releasing IP address(es) ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Jun 25 16:17:48.250026 containerd[1445]: 2024-06-25 16:17:48.220 [INFO][4906] utils.go 188: Calico CNI releasing IP address ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Jun 25 16:17:48.250026 containerd[1445]: 2024-06-25 16:17:48.242 [INFO][4913] ipam_plugin.go 411: Releasing address using handleID ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" HandleID="k8s-pod-network.447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Workload="localhost-k8s-csi--node--driver--sdrw2-eth0" Jun 25 16:17:48.250026 containerd[1445]: 2024-06-25 16:17:48.243 [INFO][4913] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:17:48.250026 containerd[1445]: 2024-06-25 16:17:48.243 [INFO][4913] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:17:48.250026 containerd[1445]: 2024-06-25 16:17:48.247 [WARNING][4913] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" HandleID="k8s-pod-network.447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Workload="localhost-k8s-csi--node--driver--sdrw2-eth0" Jun 25 16:17:48.250026 containerd[1445]: 2024-06-25 16:17:48.247 [INFO][4913] ipam_plugin.go 439: Releasing address using workloadID ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" HandleID="k8s-pod-network.447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Workload="localhost-k8s-csi--node--driver--sdrw2-eth0" Jun 25 16:17:48.250026 containerd[1445]: 2024-06-25 16:17:48.248 [INFO][4913] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:17:48.250026 containerd[1445]: 2024-06-25 16:17:48.249 [INFO][4906] k8s.go 621: Teardown processing complete. ContainerID="447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866" Jun 25 16:17:48.250351 containerd[1445]: time="2024-06-25T16:17:48.250054163Z" level=info msg="TearDown network for sandbox \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\" successfully" Jun 25 16:17:48.255979 containerd[1445]: time="2024-06-25T16:17:48.255961775Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:17:48.257485 containerd[1445]: time="2024-06-25T16:17:48.257451383Z" level=info msg="RemovePodSandbox \"447de8f379ff29c3c97b6cc549a5db98057652b54134334eb922a4d140ad3866\" returns successfully" Jun 25 16:17:48.257928 containerd[1445]: time="2024-06-25T16:17:48.257913502Z" level=info msg="StopPodSandbox for \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\"" Jun 25 16:17:48.298233 containerd[1445]: 2024-06-25 16:17:48.277 [WARNING][4931] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--9zfw8-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"9f501b7c-d0db-4a7b-9ae4-5ee408b81d2c", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 17, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84", Pod:"coredns-5dd5756b68-9zfw8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbfcb265d93", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:17:48.298233 containerd[1445]: 2024-06-25 16:17:48.277 [INFO][4931] k8s.go 608: Cleaning up netns 
ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Jun 25 16:17:48.298233 containerd[1445]: 2024-06-25 16:17:48.277 [INFO][4931] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" iface="eth0" netns="" Jun 25 16:17:48.298233 containerd[1445]: 2024-06-25 16:17:48.277 [INFO][4931] k8s.go 615: Releasing IP address(es) ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Jun 25 16:17:48.298233 containerd[1445]: 2024-06-25 16:17:48.277 [INFO][4931] utils.go 188: Calico CNI releasing IP address ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Jun 25 16:17:48.298233 containerd[1445]: 2024-06-25 16:17:48.291 [INFO][4938] ipam_plugin.go 411: Releasing address using handleID ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" HandleID="k8s-pod-network.3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Workload="localhost-k8s-coredns--5dd5756b68--9zfw8-eth0" Jun 25 16:17:48.298233 containerd[1445]: 2024-06-25 16:17:48.291 [INFO][4938] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:17:48.298233 containerd[1445]: 2024-06-25 16:17:48.291 [INFO][4938] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:17:48.298233 containerd[1445]: 2024-06-25 16:17:48.295 [WARNING][4938] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" HandleID="k8s-pod-network.3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Workload="localhost-k8s-coredns--5dd5756b68--9zfw8-eth0" Jun 25 16:17:48.298233 containerd[1445]: 2024-06-25 16:17:48.295 [INFO][4938] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" HandleID="k8s-pod-network.3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Workload="localhost-k8s-coredns--5dd5756b68--9zfw8-eth0" Jun 25 16:17:48.298233 containerd[1445]: 2024-06-25 16:17:48.295 [INFO][4938] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:17:48.298233 containerd[1445]: 2024-06-25 16:17:48.296 [INFO][4931] k8s.go 621: Teardown processing complete. ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Jun 25 16:17:48.299278 containerd[1445]: time="2024-06-25T16:17:48.298184504Z" level=info msg="TearDown network for sandbox \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\" successfully" Jun 25 16:17:48.299278 containerd[1445]: time="2024-06-25T16:17:48.298255832Z" level=info msg="StopPodSandbox for \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\" returns successfully" Jun 25 16:17:48.299278 containerd[1445]: time="2024-06-25T16:17:48.298626110Z" level=info msg="RemovePodSandbox for \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\"" Jun 25 16:17:48.299278 containerd[1445]: time="2024-06-25T16:17:48.298646041Z" level=info msg="Forcibly stopping sandbox \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\"" Jun 25 16:17:48.344307 containerd[1445]: 2024-06-25 16:17:48.321 [WARNING][4956] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--9zfw8-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"9f501b7c-d0db-4a7b-9ae4-5ee408b81d2c", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 17, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f1780627971bc3b5f30494af6aba8a5fb5b6f6cf68d477bf1299f317239aae84", Pod:"coredns-5dd5756b68-9zfw8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbfcb265d93", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:17:48.344307 containerd[1445]: 2024-06-25 16:17:48.321 [INFO][4956] k8s.go 608: Cleaning up netns 
ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Jun 25 16:17:48.344307 containerd[1445]: 2024-06-25 16:17:48.321 [INFO][4956] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" iface="eth0" netns="" Jun 25 16:17:48.344307 containerd[1445]: 2024-06-25 16:17:48.321 [INFO][4956] k8s.go 615: Releasing IP address(es) ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Jun 25 16:17:48.344307 containerd[1445]: 2024-06-25 16:17:48.321 [INFO][4956] utils.go 188: Calico CNI releasing IP address ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Jun 25 16:17:48.344307 containerd[1445]: 2024-06-25 16:17:48.334 [INFO][4962] ipam_plugin.go 411: Releasing address using handleID ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" HandleID="k8s-pod-network.3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Workload="localhost-k8s-coredns--5dd5756b68--9zfw8-eth0" Jun 25 16:17:48.344307 containerd[1445]: 2024-06-25 16:17:48.334 [INFO][4962] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:17:48.344307 containerd[1445]: 2024-06-25 16:17:48.334 [INFO][4962] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:17:48.344307 containerd[1445]: 2024-06-25 16:17:48.341 [WARNING][4962] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" HandleID="k8s-pod-network.3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Workload="localhost-k8s-coredns--5dd5756b68--9zfw8-eth0" Jun 25 16:17:48.344307 containerd[1445]: 2024-06-25 16:17:48.341 [INFO][4962] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" HandleID="k8s-pod-network.3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Workload="localhost-k8s-coredns--5dd5756b68--9zfw8-eth0" Jun 25 16:17:48.344307 containerd[1445]: 2024-06-25 16:17:48.342 [INFO][4962] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:17:48.344307 containerd[1445]: 2024-06-25 16:17:48.343 [INFO][4956] k8s.go 621: Teardown processing complete. ContainerID="3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb" Jun 25 16:17:48.345066 containerd[1445]: time="2024-06-25T16:17:48.344347484Z" level=info msg="TearDown network for sandbox \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\" successfully" Jun 25 16:17:48.345657 containerd[1445]: time="2024-06-25T16:17:48.345639604Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 16:17:48.345690 containerd[1445]: time="2024-06-25T16:17:48.345672212Z" level=info msg="RemovePodSandbox \"3fdbb1cc8c54bf3882e31a3801da83b190654b8e5827a9d5e840592e617d39bb\" returns successfully" Jun 25 16:17:48.346001 containerd[1445]: time="2024-06-25T16:17:48.345985422Z" level=info msg="StopPodSandbox for \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\"" Jun 25 16:17:48.392283 containerd[1445]: 2024-06-25 16:17:48.371 [WARNING][4980] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0", GenerateName:"calico-kube-controllers-6d8874f6d7-", Namespace:"calico-system", SelfLink:"", UID:"087c036b-d5d3-4f8e-b3ce-55f1030399f4", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d8874f6d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b", Pod:"calico-kube-controllers-6d8874f6d7-8dx8x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali094c514e434", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:17:48.392283 containerd[1445]: 2024-06-25 16:17:48.371 [INFO][4980] k8s.go 608: Cleaning up netns ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Jun 25 16:17:48.392283 containerd[1445]: 2024-06-25 16:17:48.371 [INFO][4980] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" iface="eth0" netns="" Jun 25 16:17:48.392283 containerd[1445]: 2024-06-25 16:17:48.371 [INFO][4980] k8s.go 615: Releasing IP address(es) ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Jun 25 16:17:48.392283 containerd[1445]: 2024-06-25 16:17:48.371 [INFO][4980] utils.go 188: Calico CNI releasing IP address ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Jun 25 16:17:48.392283 containerd[1445]: 2024-06-25 16:17:48.384 [INFO][4986] ipam_plugin.go 411: Releasing address using handleID ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" HandleID="k8s-pod-network.26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Workload="localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0" Jun 25 16:17:48.392283 containerd[1445]: 2024-06-25 16:17:48.384 [INFO][4986] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:17:48.392283 containerd[1445]: 2024-06-25 16:17:48.384 [INFO][4986] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:17:48.392283 containerd[1445]: 2024-06-25 16:17:48.387 [WARNING][4986] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" HandleID="k8s-pod-network.26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Workload="localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0" Jun 25 16:17:48.392283 containerd[1445]: 2024-06-25 16:17:48.387 [INFO][4986] ipam_plugin.go 439: Releasing address using workloadID ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" HandleID="k8s-pod-network.26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Workload="localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0" Jun 25 16:17:48.392283 containerd[1445]: 2024-06-25 16:17:48.390 [INFO][4986] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:17:48.392283 containerd[1445]: 2024-06-25 16:17:48.391 [INFO][4980] k8s.go 621: Teardown processing complete. ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Jun 25 16:17:48.392635 containerd[1445]: time="2024-06-25T16:17:48.392305362Z" level=info msg="TearDown network for sandbox \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\" successfully" Jun 25 16:17:48.392635 containerd[1445]: time="2024-06-25T16:17:48.392330063Z" level=info msg="StopPodSandbox for \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\" returns successfully" Jun 25 16:17:48.392635 containerd[1445]: time="2024-06-25T16:17:48.392540448Z" level=info msg="RemovePodSandbox for \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\"" Jun 25 16:17:48.392635 containerd[1445]: time="2024-06-25T16:17:48.392557219Z" level=info msg="Forcibly stopping sandbox \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\"" Jun 25 16:17:48.432430 containerd[1445]: 2024-06-25 16:17:48.414 [WARNING][5004] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0", GenerateName:"calico-kube-controllers-6d8874f6d7-", Namespace:"calico-system", SelfLink:"", UID:"087c036b-d5d3-4f8e-b3ce-55f1030399f4", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d8874f6d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aba9c7f466be8fa925c3ff3c4af67e0409dabeb46b0dbfbdbea20e6262147a2b", Pod:"calico-kube-controllers-6d8874f6d7-8dx8x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali094c514e434", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:17:48.432430 containerd[1445]: 2024-06-25 16:17:48.414 [INFO][5004] k8s.go 608: Cleaning up netns ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Jun 25 16:17:48.432430 containerd[1445]: 2024-06-25 16:17:48.414 [INFO][5004] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" iface="eth0" netns="" Jun 25 16:17:48.432430 containerd[1445]: 2024-06-25 16:17:48.414 [INFO][5004] k8s.go 615: Releasing IP address(es) ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Jun 25 16:17:48.432430 containerd[1445]: 2024-06-25 16:17:48.414 [INFO][5004] utils.go 188: Calico CNI releasing IP address ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Jun 25 16:17:48.432430 containerd[1445]: 2024-06-25 16:17:48.426 [INFO][5011] ipam_plugin.go 411: Releasing address using handleID ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" HandleID="k8s-pod-network.26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Workload="localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0" Jun 25 16:17:48.432430 containerd[1445]: 2024-06-25 16:17:48.426 [INFO][5011] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:17:48.432430 containerd[1445]: 2024-06-25 16:17:48.426 [INFO][5011] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:17:48.432430 containerd[1445]: 2024-06-25 16:17:48.429 [WARNING][5011] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" HandleID="k8s-pod-network.26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Workload="localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0" Jun 25 16:17:48.432430 containerd[1445]: 2024-06-25 16:17:48.429 [INFO][5011] ipam_plugin.go 439: Releasing address using workloadID ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" HandleID="k8s-pod-network.26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Workload="localhost-k8s-calico--kube--controllers--6d8874f6d7--8dx8x-eth0" Jun 25 16:17:48.432430 containerd[1445]: 2024-06-25 16:17:48.430 [INFO][5011] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:17:48.432430 containerd[1445]: 2024-06-25 16:17:48.431 [INFO][5004] k8s.go 621: Teardown processing complete. ContainerID="26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506" Jun 25 16:17:48.432808 containerd[1445]: time="2024-06-25T16:17:48.432788539Z" level=info msg="TearDown network for sandbox \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\" successfully" Jun 25 16:17:48.433962 containerd[1445]: time="2024-06-25T16:17:48.433949151Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 16:17:48.434034 containerd[1445]: time="2024-06-25T16:17:48.434023361Z" level=info msg="RemovePodSandbox \"26346dc0b236cb23535a285f76db96a74c5c0e5c8a725ea158ec6ea9e9e03506\" returns successfully" Jun 25 16:17:48.434373 containerd[1445]: time="2024-06-25T16:17:48.434351033Z" level=info msg="StopPodSandbox for \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\"" Jun 25 16:17:48.476424 containerd[1445]: 2024-06-25 16:17:48.458 [WARNING][5029] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--xqfmx-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0a650a4d-de28-477d-828b-814dd78885cf", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 17, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a", Pod:"coredns-5dd5756b68-xqfmx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliadcafb6a9f8", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:17:48.476424 containerd[1445]: 2024-06-25 16:17:48.458 [INFO][5029] k8s.go 608: Cleaning up netns ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Jun 25 16:17:48.476424 containerd[1445]: 2024-06-25 16:17:48.458 [INFO][5029] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" iface="eth0" netns="" Jun 25 16:17:48.476424 containerd[1445]: 2024-06-25 16:17:48.458 [INFO][5029] k8s.go 615: Releasing IP address(es) ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Jun 25 16:17:48.476424 containerd[1445]: 2024-06-25 16:17:48.458 [INFO][5029] utils.go 188: Calico CNI releasing IP address ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Jun 25 16:17:48.476424 containerd[1445]: 2024-06-25 16:17:48.470 [INFO][5035] ipam_plugin.go 411: Releasing address using handleID ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" HandleID="k8s-pod-network.abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Workload="localhost-k8s-coredns--5dd5756b68--xqfmx-eth0" Jun 25 16:17:48.476424 containerd[1445]: 2024-06-25 16:17:48.470 [INFO][5035] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:17:48.476424 containerd[1445]: 2024-06-25 16:17:48.470 [INFO][5035] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:17:48.476424 containerd[1445]: 2024-06-25 16:17:48.473 [WARNING][5035] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" HandleID="k8s-pod-network.abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Workload="localhost-k8s-coredns--5dd5756b68--xqfmx-eth0" Jun 25 16:17:48.476424 containerd[1445]: 2024-06-25 16:17:48.473 [INFO][5035] ipam_plugin.go 439: Releasing address using workloadID ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" HandleID="k8s-pod-network.abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Workload="localhost-k8s-coredns--5dd5756b68--xqfmx-eth0" Jun 25 16:17:48.476424 containerd[1445]: 2024-06-25 16:17:48.474 [INFO][5035] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:17:48.476424 containerd[1445]: 2024-06-25 16:17:48.475 [INFO][5029] k8s.go 621: Teardown processing complete. ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Jun 25 16:17:48.476424 containerd[1445]: time="2024-06-25T16:17:48.476321236Z" level=info msg="TearDown network for sandbox \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\" successfully" Jun 25 16:17:48.476424 containerd[1445]: time="2024-06-25T16:17:48.476339587Z" level=info msg="StopPodSandbox for \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\" returns successfully" Jun 25 16:17:48.476916 containerd[1445]: time="2024-06-25T16:17:48.476653250Z" level=info msg="RemovePodSandbox for \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\"" Jun 25 16:17:48.476916 containerd[1445]: time="2024-06-25T16:17:48.476671095Z" level=info msg="Forcibly stopping sandbox \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\"" Jun 25 16:17:48.518761 containerd[1445]: 2024-06-25 16:17:48.498 [WARNING][5054] k8s.go 572: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--xqfmx-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0a650a4d-de28-477d-828b-814dd78885cf", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 17, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a884f156b460053742e1486d00059a1fafe261a4f89772307b2bb7e7f907237a", Pod:"coredns-5dd5756b68-xqfmx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliadcafb6a9f8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:17:48.518761 containerd[1445]: 2024-06-25 16:17:48.498 
[INFO][5054] k8s.go 608: Cleaning up netns ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Jun 25 16:17:48.518761 containerd[1445]: 2024-06-25 16:17:48.498 [INFO][5054] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" iface="eth0" netns="" Jun 25 16:17:48.518761 containerd[1445]: 2024-06-25 16:17:48.498 [INFO][5054] k8s.go 615: Releasing IP address(es) ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Jun 25 16:17:48.518761 containerd[1445]: 2024-06-25 16:17:48.498 [INFO][5054] utils.go 188: Calico CNI releasing IP address ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Jun 25 16:17:48.518761 containerd[1445]: 2024-06-25 16:17:48.512 [INFO][5060] ipam_plugin.go 411: Releasing address using handleID ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" HandleID="k8s-pod-network.abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Workload="localhost-k8s-coredns--5dd5756b68--xqfmx-eth0" Jun 25 16:17:48.518761 containerd[1445]: 2024-06-25 16:17:48.512 [INFO][5060] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:17:48.518761 containerd[1445]: 2024-06-25 16:17:48.512 [INFO][5060] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:17:48.518761 containerd[1445]: 2024-06-25 16:17:48.516 [WARNING][5060] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" HandleID="k8s-pod-network.abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Workload="localhost-k8s-coredns--5dd5756b68--xqfmx-eth0" Jun 25 16:17:48.518761 containerd[1445]: 2024-06-25 16:17:48.516 [INFO][5060] ipam_plugin.go 439: Releasing address using workloadID ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" HandleID="k8s-pod-network.abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Workload="localhost-k8s-coredns--5dd5756b68--xqfmx-eth0" Jun 25 16:17:48.518761 containerd[1445]: 2024-06-25 16:17:48.516 [INFO][5060] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:17:48.518761 containerd[1445]: 2024-06-25 16:17:48.517 [INFO][5054] k8s.go 621: Teardown processing complete. ContainerID="abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37" Jun 25 16:17:48.519158 containerd[1445]: time="2024-06-25T16:17:48.519139040Z" level=info msg="TearDown network for sandbox \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\" successfully" Jun 25 16:17:48.520405 containerd[1445]: time="2024-06-25T16:17:48.520392084Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:17:48.520506 containerd[1445]: time="2024-06-25T16:17:48.520490739Z" level=info msg="RemovePodSandbox \"abed09f5cc119a0c0ff4d042cb23edad545d5c24fbca705e406d3b3993b20f37\" returns successfully" Jun 25 16:17:50.405908 systemd[1]: run-containerd-runc-k8s.io-c226ece918507a009446cf70e6bb58c32614915e6f7cd8ea70ff780dab08dd34-runc.dMK3l0.mount: Deactivated successfully. 
Jun 25 16:17:52.440000 audit[5089]: NETFILTER_CFG table=filter:107 family=2 entries=9 op=nft_register_rule pid=5089 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:52.442910 kernel: kauditd_printk_skb: 42 callbacks suppressed Jun 25 16:17:52.442959 kernel: audit: type=1325 audit(1719332272.440:313): table=filter:107 family=2 entries=9 op=nft_register_rule pid=5089 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:52.440000 audit[5089]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffce2374740 a2=0 a3=7ffce237472c items=0 ppid=2737 pid=5089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:52.446391 kernel: audit: type=1300 audit(1719332272.440:313): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffce2374740 a2=0 a3=7ffce237472c items=0 ppid=2737 pid=5089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:52.446443 kernel: audit: type=1327 audit(1719332272.440:313): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:52.440000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:52.443000 audit[5089]: NETFILTER_CFG table=nat:108 family=2 entries=20 op=nft_register_rule pid=5089 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:52.443000 audit[5089]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffce2374740 a2=0 a3=7ffce237472c items=0 ppid=2737 pid=5089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:52.453356 kernel: audit: type=1325 audit(1719332272.443:314): table=nat:108 family=2 entries=20 op=nft_register_rule pid=5089 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:52.453408 kernel: audit: type=1300 audit(1719332272.443:314): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffce2374740 a2=0 a3=7ffce237472c items=0 ppid=2737 pid=5089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:52.453429 kernel: audit: type=1327 audit(1719332272.443:314): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:52.443000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:52.461000 audit[5091]: NETFILTER_CFG table=filter:109 family=2 entries=10 op=nft_register_rule pid=5091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:52.461000 audit[5091]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff82230690 a2=0 a3=7fff8223067c items=0 ppid=2737 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:52.467380 kernel: audit: type=1325 audit(1719332272.461:315): table=filter:109 family=2 entries=10 op=nft_register_rule pid=5091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:52.467441 kernel: audit: type=1300 audit(1719332272.461:315): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff82230690 a2=0 a3=7fff8223067c items=0 ppid=2737 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:52.461000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:52.467639 kernel: audit: type=1327 audit(1719332272.461:315): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:52.487203 kernel: audit: type=1325 audit(1719332272.467:316): table=nat:110 family=2 entries=20 op=nft_register_rule pid=5091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:52.467000 audit[5091]: NETFILTER_CFG table=nat:110 family=2 entries=20 op=nft_register_rule pid=5091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:52.467000 audit[5091]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff82230690 a2=0 a3=7fff8223067c items=0 ppid=2737 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:52.467000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:52.593722 kubelet[2601]: I0625 16:17:52.593700 2601 topology_manager.go:215] "Topology Admit Handler" podUID="491dd4ff-c23e-4d4c-814b-62a7ef6e1c2a" podNamespace="calico-apiserver" podName="calico-apiserver-6b44b94676-zswfz" Jun 25 16:17:52.594124 kubelet[2601]: I0625 16:17:52.594116 2601 topology_manager.go:215] "Topology Admit Handler" podUID="f2993fe4-df7b-424c-84df-7b5da0bc206a" podNamespace="calico-apiserver" podName="calico-apiserver-6b44b94676-q8sjk" Jun 25 16:17:52.681546 kubelet[2601]: I0625 16:17:52.681525 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-dxwhc\" (UniqueName: \"kubernetes.io/projected/491dd4ff-c23e-4d4c-814b-62a7ef6e1c2a-kube-api-access-dxwhc\") pod \"calico-apiserver-6b44b94676-zswfz\" (UID: \"491dd4ff-c23e-4d4c-814b-62a7ef6e1c2a\") " pod="calico-apiserver/calico-apiserver-6b44b94676-zswfz" Jun 25 16:17:52.681654 kubelet[2601]: I0625 16:17:52.681559 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f2993fe4-df7b-424c-84df-7b5da0bc206a-calico-apiserver-certs\") pod \"calico-apiserver-6b44b94676-q8sjk\" (UID: \"f2993fe4-df7b-424c-84df-7b5da0bc206a\") " pod="calico-apiserver/calico-apiserver-6b44b94676-q8sjk" Jun 25 16:17:52.681654 kubelet[2601]: I0625 16:17:52.681573 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2slfg\" (UniqueName: \"kubernetes.io/projected/f2993fe4-df7b-424c-84df-7b5da0bc206a-kube-api-access-2slfg\") pod \"calico-apiserver-6b44b94676-q8sjk\" (UID: \"f2993fe4-df7b-424c-84df-7b5da0bc206a\") " pod="calico-apiserver/calico-apiserver-6b44b94676-q8sjk" Jun 25 16:17:52.681654 kubelet[2601]: I0625 16:17:52.681587 2601 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/491dd4ff-c23e-4d4c-814b-62a7ef6e1c2a-calico-apiserver-certs\") pod \"calico-apiserver-6b44b94676-zswfz\" (UID: \"491dd4ff-c23e-4d4c-814b-62a7ef6e1c2a\") " pod="calico-apiserver/calico-apiserver-6b44b94676-zswfz" Jun 25 16:17:52.899371 containerd[1445]: time="2024-06-25T16:17:52.898372104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b44b94676-zswfz,Uid:491dd4ff-c23e-4d4c-814b-62a7ef6e1c2a,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:17:52.899371 containerd[1445]: time="2024-06-25T16:17:52.898372652Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6b44b94676-q8sjk,Uid:f2993fe4-df7b-424c-84df-7b5da0bc206a,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:17:53.018229 systemd-networkd[1209]: cali278dd3573ba: Link UP Jun 25 16:17:53.019629 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:17:53.020320 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali278dd3573ba: link becomes ready Jun 25 16:17:53.020360 systemd-networkd[1209]: cali278dd3573ba: Gained carrier Jun 25 16:17:53.028220 containerd[1445]: 2024-06-25 16:17:52.947 [INFO][5109] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b44b94676--zswfz-eth0 calico-apiserver-6b44b94676- calico-apiserver 491dd4ff-c23e-4d4c-814b-62a7ef6e1c2a 824 0 2024-06-25 16:17:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b44b94676 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b44b94676-zswfz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali278dd3573ba [] []}} ContainerID="0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085" Namespace="calico-apiserver" Pod="calico-apiserver-6b44b94676-zswfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b44b94676--zswfz-" Jun 25 16:17:53.028220 containerd[1445]: 2024-06-25 16:17:52.947 [INFO][5109] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085" Namespace="calico-apiserver" Pod="calico-apiserver-6b44b94676-zswfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b44b94676--zswfz-eth0" Jun 25 16:17:53.028220 containerd[1445]: 2024-06-25 16:17:52.976 [INFO][5123] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085" HandleID="k8s-pod-network.0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085" Workload="localhost-k8s-calico--apiserver--6b44b94676--zswfz-eth0" Jun 25 16:17:53.028220 containerd[1445]: 2024-06-25 16:17:52.983 [INFO][5123] ipam_plugin.go 264: Auto assigning IP ContainerID="0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085" HandleID="k8s-pod-network.0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085" Workload="localhost-k8s-calico--apiserver--6b44b94676--zswfz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000511610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b44b94676-zswfz", "timestamp":"2024-06-25 16:17:52.976940282 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:17:53.028220 containerd[1445]: 2024-06-25 16:17:52.983 [INFO][5123] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:17:53.028220 containerd[1445]: 2024-06-25 16:17:52.983 [INFO][5123] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:17:53.028220 containerd[1445]: 2024-06-25 16:17:52.983 [INFO][5123] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:17:53.028220 containerd[1445]: 2024-06-25 16:17:52.984 [INFO][5123] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085" host="localhost" Jun 25 16:17:53.028220 containerd[1445]: 2024-06-25 16:17:52.986 [INFO][5123] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:17:53.028220 containerd[1445]: 2024-06-25 16:17:52.989 [INFO][5123] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:17:53.028220 containerd[1445]: 2024-06-25 16:17:52.991 [INFO][5123] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:17:53.028220 containerd[1445]: 2024-06-25 16:17:52.992 [INFO][5123] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:17:53.028220 containerd[1445]: 2024-06-25 16:17:52.992 [INFO][5123] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085" host="localhost" Jun 25 16:17:53.028220 containerd[1445]: 2024-06-25 16:17:52.993 [INFO][5123] ipam.go 1685: Creating new handle: k8s-pod-network.0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085 Jun 25 16:17:53.028220 containerd[1445]: 2024-06-25 16:17:52.995 [INFO][5123] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085" host="localhost" Jun 25 16:17:53.028220 containerd[1445]: 2024-06-25 16:17:52.999 [INFO][5123] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085" 
host="localhost" Jun 25 16:17:53.028220 containerd[1445]: 2024-06-25 16:17:52.999 [INFO][5123] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085" host="localhost" Jun 25 16:17:53.028220 containerd[1445]: 2024-06-25 16:17:53.000 [INFO][5123] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:17:53.028220 containerd[1445]: 2024-06-25 16:17:53.000 [INFO][5123] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085" HandleID="k8s-pod-network.0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085" Workload="localhost-k8s-calico--apiserver--6b44b94676--zswfz-eth0" Jun 25 16:17:53.029931 containerd[1445]: 2024-06-25 16:17:53.011 [INFO][5109] k8s.go 386: Populated endpoint ContainerID="0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085" Namespace="calico-apiserver" Pod="calico-apiserver-6b44b94676-zswfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b44b94676--zswfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b44b94676--zswfz-eth0", GenerateName:"calico-apiserver-6b44b94676-", Namespace:"calico-apiserver", SelfLink:"", UID:"491dd4ff-c23e-4d4c-814b-62a7ef6e1c2a", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 17, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b44b94676", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b44b94676-zswfz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali278dd3573ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:17:53.029931 containerd[1445]: 2024-06-25 16:17:53.011 [INFO][5109] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085" Namespace="calico-apiserver" Pod="calico-apiserver-6b44b94676-zswfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b44b94676--zswfz-eth0" Jun 25 16:17:53.029931 containerd[1445]: 2024-06-25 16:17:53.011 [INFO][5109] dataplane_linux.go 68: Setting the host side veth name to cali278dd3573ba ContainerID="0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085" Namespace="calico-apiserver" Pod="calico-apiserver-6b44b94676-zswfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b44b94676--zswfz-eth0" Jun 25 16:17:53.029931 containerd[1445]: 2024-06-25 16:17:53.020 [INFO][5109] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085" Namespace="calico-apiserver" Pod="calico-apiserver-6b44b94676-zswfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b44b94676--zswfz-eth0" Jun 25 16:17:53.029931 containerd[1445]: 2024-06-25 16:17:53.020 [INFO][5109] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085" Namespace="calico-apiserver" Pod="calico-apiserver-6b44b94676-zswfz" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6b44b94676--zswfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b44b94676--zswfz-eth0", GenerateName:"calico-apiserver-6b44b94676-", Namespace:"calico-apiserver", SelfLink:"", UID:"491dd4ff-c23e-4d4c-814b-62a7ef6e1c2a", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 17, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b44b94676", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085", Pod:"calico-apiserver-6b44b94676-zswfz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali278dd3573ba", MAC:"4e:7d:fa:a7:75:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:17:53.029931 containerd[1445]: 2024-06-25 16:17:53.025 [INFO][5109] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085" Namespace="calico-apiserver" Pod="calico-apiserver-6b44b94676-zswfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b44b94676--zswfz-eth0" Jun 25 16:17:53.040339 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): calic19f75f2307: link becomes ready Jun 25 16:17:53.040138 systemd-networkd[1209]: calic19f75f2307: Link UP Jun 25 16:17:53.040227 systemd-networkd[1209]: calic19f75f2307: Gained carrier Jun 25 16:17:53.053000 audit[5153]: NETFILTER_CFG table=filter:111 family=2 entries=55 op=nft_register_chain pid=5153 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:17:53.053000 audit[5153]: SYSCALL arch=c000003e syscall=46 success=yes exit=27464 a0=3 a1=7ffd3ce0dcd0 a2=0 a3=7ffd3ce0dcbc items=0 ppid=4607 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:53.053000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:17:53.061927 containerd[1445]: 2024-06-25 16:17:52.947 [INFO][5097] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b44b94676--q8sjk-eth0 calico-apiserver-6b44b94676- calico-apiserver f2993fe4-df7b-424c-84df-7b5da0bc206a 825 0 2024-06-25 16:17:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b44b94676 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b44b94676-q8sjk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic19f75f2307 [] []}} ContainerID="e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676" Namespace="calico-apiserver" Pod="calico-apiserver-6b44b94676-q8sjk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b44b94676--q8sjk-" Jun 25 16:17:53.061927 containerd[1445]: 2024-06-25 16:17:52.947 
[INFO][5097] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676" Namespace="calico-apiserver" Pod="calico-apiserver-6b44b94676-q8sjk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b44b94676--q8sjk-eth0" Jun 25 16:17:53.061927 containerd[1445]: 2024-06-25 16:17:52.976 [INFO][5124] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676" HandleID="k8s-pod-network.e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676" Workload="localhost-k8s-calico--apiserver--6b44b94676--q8sjk-eth0" Jun 25 16:17:53.061927 containerd[1445]: 2024-06-25 16:17:52.985 [INFO][5124] ipam_plugin.go 264: Auto assigning IP ContainerID="e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676" HandleID="k8s-pod-network.e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676" Workload="localhost-k8s-calico--apiserver--6b44b94676--q8sjk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309170), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b44b94676-q8sjk", "timestamp":"2024-06-25 16:17:52.976723695 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:17:53.061927 containerd[1445]: 2024-06-25 16:17:52.985 [INFO][5124] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:17:53.061927 containerd[1445]: 2024-06-25 16:17:52.999 [INFO][5124] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:17:53.061927 containerd[1445]: 2024-06-25 16:17:52.999 [INFO][5124] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:17:53.061927 containerd[1445]: 2024-06-25 16:17:53.000 [INFO][5124] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676" host="localhost" Jun 25 16:17:53.061927 containerd[1445]: 2024-06-25 16:17:53.003 [INFO][5124] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:17:53.061927 containerd[1445]: 2024-06-25 16:17:53.005 [INFO][5124] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:17:53.061927 containerd[1445]: 2024-06-25 16:17:53.006 [INFO][5124] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:17:53.061927 containerd[1445]: 2024-06-25 16:17:53.007 [INFO][5124] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:17:53.061927 containerd[1445]: 2024-06-25 16:17:53.007 [INFO][5124] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676" host="localhost" Jun 25 16:17:53.061927 containerd[1445]: 2024-06-25 16:17:53.008 [INFO][5124] ipam.go 1685: Creating new handle: k8s-pod-network.e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676 Jun 25 16:17:53.061927 containerd[1445]: 2024-06-25 16:17:53.010 [INFO][5124] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676" host="localhost" Jun 25 16:17:53.061927 containerd[1445]: 2024-06-25 16:17:53.029 [INFO][5124] ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676" 
host="localhost" Jun 25 16:17:53.061927 containerd[1445]: 2024-06-25 16:17:53.029 [INFO][5124] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676" host="localhost" Jun 25 16:17:53.061927 containerd[1445]: 2024-06-25 16:17:53.029 [INFO][5124] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:17:53.061927 containerd[1445]: 2024-06-25 16:17:53.029 [INFO][5124] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676" HandleID="k8s-pod-network.e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676" Workload="localhost-k8s-calico--apiserver--6b44b94676--q8sjk-eth0" Jun 25 16:17:53.063186 containerd[1445]: 2024-06-25 16:17:53.036 [INFO][5097] k8s.go 386: Populated endpoint ContainerID="e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676" Namespace="calico-apiserver" Pod="calico-apiserver-6b44b94676-q8sjk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b44b94676--q8sjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b44b94676--q8sjk-eth0", GenerateName:"calico-apiserver-6b44b94676-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2993fe4-df7b-424c-84df-7b5da0bc206a", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 17, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b44b94676", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b44b94676-q8sjk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic19f75f2307", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:17:53.063186 containerd[1445]: 2024-06-25 16:17:53.036 [INFO][5097] k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676" Namespace="calico-apiserver" Pod="calico-apiserver-6b44b94676-q8sjk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b44b94676--q8sjk-eth0" Jun 25 16:17:53.063186 containerd[1445]: 2024-06-25 16:17:53.036 [INFO][5097] dataplane_linux.go 68: Setting the host side veth name to calic19f75f2307 ContainerID="e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676" Namespace="calico-apiserver" Pod="calico-apiserver-6b44b94676-q8sjk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b44b94676--q8sjk-eth0" Jun 25 16:17:53.063186 containerd[1445]: 2024-06-25 16:17:53.037 [INFO][5097] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676" Namespace="calico-apiserver" Pod="calico-apiserver-6b44b94676-q8sjk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b44b94676--q8sjk-eth0" Jun 25 16:17:53.063186 containerd[1445]: 2024-06-25 16:17:53.041 [INFO][5097] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676" Namespace="calico-apiserver" Pod="calico-apiserver-6b44b94676-q8sjk" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6b44b94676--q8sjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b44b94676--q8sjk-eth0", GenerateName:"calico-apiserver-6b44b94676-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2993fe4-df7b-424c-84df-7b5da0bc206a", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 17, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b44b94676", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676", Pod:"calico-apiserver-6b44b94676-q8sjk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic19f75f2307", MAC:"8e:41:66:c7:d8:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:17:53.063186 containerd[1445]: 2024-06-25 16:17:53.057 [INFO][5097] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676" Namespace="calico-apiserver" Pod="calico-apiserver-6b44b94676-q8sjk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b44b94676--q8sjk-eth0" Jun 25 16:17:53.080000 audit[5164]: 
NETFILTER_CFG table=filter:112 family=2 entries=49 op=nft_register_chain pid=5164 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:17:53.080000 audit[5164]: SYSCALL arch=c000003e syscall=46 success=yes exit=24300 a0=3 a1=7ffc8a887460 a2=0 a3=7ffc8a88744c items=0 ppid=4607 pid=5164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:53.080000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:17:53.092690 containerd[1445]: time="2024-06-25T16:17:53.092613493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:17:53.092808 containerd[1445]: time="2024-06-25T16:17:53.092672795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:17:53.092808 containerd[1445]: time="2024-06-25T16:17:53.092688666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:17:53.092808 containerd[1445]: time="2024-06-25T16:17:53.092697535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:17:53.097223 containerd[1445]: time="2024-06-25T16:17:53.097070201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:17:53.097223 containerd[1445]: time="2024-06-25T16:17:53.097113404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:17:53.097223 containerd[1445]: time="2024-06-25T16:17:53.097127187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:17:53.097223 containerd[1445]: time="2024-06-25T16:17:53.097135456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:17:53.115002 systemd-resolved[1362]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:17:53.139473 systemd-resolved[1362]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:17:53.151355 containerd[1445]: time="2024-06-25T16:17:53.151284911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b44b94676-zswfz,Uid:491dd4ff-c23e-4d4c-814b-62a7ef6e1c2a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085\"" Jun 25 16:17:53.155073 containerd[1445]: time="2024-06-25T16:17:53.154489210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:17:53.165277 containerd[1445]: time="2024-06-25T16:17:53.165253979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b44b94676-q8sjk,Uid:f2993fe4-df7b-424c-84df-7b5da0bc206a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676\"" Jun 25 16:17:54.863742 systemd-networkd[1209]: calic19f75f2307: Gained IPv6LL Jun 25 16:17:54.991700 systemd-networkd[1209]: cali278dd3573ba: Gained IPv6LL Jun 25 16:17:55.130807 containerd[1445]: time="2024-06-25T16:17:55.130483236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 16:17:55.133996 containerd[1445]: 
time="2024-06-25T16:17:55.132255826Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 1.976980445s" Jun 25 16:17:55.133996 containerd[1445]: time="2024-06-25T16:17:55.132281477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:17:55.133996 containerd[1445]: time="2024-06-25T16:17:55.133175279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:17:55.136220 containerd[1445]: time="2024-06-25T16:17:55.136199300Z" level=info msg="CreateContainer within sandbox \"0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:17:55.146968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount831182340.mount: Deactivated successfully. 
Jun 25 16:17:55.152514 containerd[1445]: time="2024-06-25T16:17:55.152478387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:55.153039 containerd[1445]: time="2024-06-25T16:17:55.153015399Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:55.153496 containerd[1445]: time="2024-06-25T16:17:55.153479020Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:55.153908 containerd[1445]: time="2024-06-25T16:17:55.153888706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:55.168340 containerd[1445]: time="2024-06-25T16:17:55.168310728Z" level=info msg="CreateContainer within sandbox \"0e8f467456a05f9f1488935a5416b6fd115c33f50ec6a1271b67e1a8a9bd4085\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cde3684d3a3cc9e680f45d5db76abd2a822263142468de2ccd69cdd34aeda774\"" Jun 25 16:17:55.168933 containerd[1445]: time="2024-06-25T16:17:55.168913577Z" level=info msg="StartContainer for \"cde3684d3a3cc9e680f45d5db76abd2a822263142468de2ccd69cdd34aeda774\"" Jun 25 16:17:55.235365 containerd[1445]: time="2024-06-25T16:17:55.235330395Z" level=info msg="StartContainer for \"cde3684d3a3cc9e680f45d5db76abd2a822263142468de2ccd69cdd34aeda774\" returns successfully" Jun 25 16:17:55.681780 containerd[1445]: time="2024-06-25T16:17:55.681742673Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:55.684583 containerd[1445]: 
time="2024-06-25T16:17:55.684549977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Jun 25 16:17:55.689143 containerd[1445]: time="2024-06-25T16:17:55.689111155Z" level=info msg="ImageUpdate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:55.689919 containerd[1445]: time="2024-06-25T16:17:55.689905400Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:55.690781 containerd[1445]: time="2024-06-25T16:17:55.690767793Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:17:55.691190 containerd[1445]: time="2024-06-25T16:17:55.691169705Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 557.976369ms" Jun 25 16:17:55.691230 containerd[1445]: time="2024-06-25T16:17:55.691192368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:17:55.692644 containerd[1445]: time="2024-06-25T16:17:55.692614721Z" level=info msg="CreateContainer within sandbox \"e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:17:55.785436 containerd[1445]: time="2024-06-25T16:17:55.785400323Z" level=info msg="CreateContainer 
within sandbox \"e045c3efd2b96ee7a4f5a839ed63f65407d50f1a3c04e838a40695c548d78676\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"239aefc4b7a162fd9b5ae7360bb0bb8bcd665890e8fbd8f20660b63cfac7ea73\"" Jun 25 16:17:55.786877 containerd[1445]: time="2024-06-25T16:17:55.786728053Z" level=info msg="StartContainer for \"239aefc4b7a162fd9b5ae7360bb0bb8bcd665890e8fbd8f20660b63cfac7ea73\"" Jun 25 16:17:55.970717 containerd[1445]: time="2024-06-25T16:17:55.970652989Z" level=info msg="StartContainer for \"239aefc4b7a162fd9b5ae7360bb0bb8bcd665890e8fbd8f20660b63cfac7ea73\" returns successfully" Jun 25 16:17:56.143459 systemd[1]: run-containerd-runc-k8s.io-cde3684d3a3cc9e680f45d5db76abd2a822263142468de2ccd69cdd34aeda774-runc.JTbCeg.mount: Deactivated successfully. Jun 25 16:17:56.160087 systemd[1]: run-containerd-runc-k8s.io-2f13a7a707c80ec9840afc83b4fc716af3455474b4407b19b54d80625db6afc0-runc.QquJuK.mount: Deactivated successfully. Jun 25 16:17:56.243231 kubelet[2601]: I0625 16:17:56.243047 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b44b94676-zswfz" podStartSLOduration=2.252229963 podCreationTimestamp="2024-06-25 16:17:52 +0000 UTC" firstStartedPulling="2024-06-25 16:17:53.152955515 +0000 UTC m=+65.269672363" lastFinishedPulling="2024-06-25 16:17:55.132515225 +0000 UTC m=+67.249232073" observedRunningTime="2024-06-25 16:17:56.219747098 +0000 UTC m=+68.336463955" watchObservedRunningTime="2024-06-25 16:17:56.231789673 +0000 UTC m=+68.348506525" Jun 25 16:17:56.243773 kubelet[2601]: I0625 16:17:56.243762 2601 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b44b94676-q8sjk" podStartSLOduration=1.718555429 podCreationTimestamp="2024-06-25 16:17:52 +0000 UTC" firstStartedPulling="2024-06-25 16:17:53.166136754 +0000 UTC m=+65.282853602" lastFinishedPulling="2024-06-25 16:17:55.691318671 +0000 UTC m=+67.808035519" 
observedRunningTime="2024-06-25 16:17:56.243621367 +0000 UTC m=+68.360338218" watchObservedRunningTime="2024-06-25 16:17:56.243737346 +0000 UTC m=+68.360454199" Jun 25 16:17:56.391000 audit[5357]: NETFILTER_CFG table=filter:113 family=2 entries=10 op=nft_register_rule pid=5357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:56.391000 audit[5357]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd52cf2e40 a2=0 a3=7ffd52cf2e2c items=0 ppid=2737 pid=5357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:56.391000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:56.391000 audit[5357]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=5357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:56.391000 audit[5357]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd52cf2e40 a2=0 a3=7ffd52cf2e2c items=0 ppid=2737 pid=5357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:56.391000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:56.403000 audit[5359]: NETFILTER_CFG table=filter:115 family=2 entries=9 op=nft_register_rule pid=5359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:56.403000 audit[5359]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffffe3c09d0 a2=0 a3=7ffffe3c09bc items=0 ppid=2737 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:56.403000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:56.405000 audit[5359]: NETFILTER_CFG table=nat:116 family=2 entries=27 op=nft_register_chain pid=5359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:56.405000 audit[5359]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffffe3c09d0 a2=0 a3=7ffffe3c09bc items=0 ppid=2737 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:56.405000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:57.414000 audit[5361]: NETFILTER_CFG table=filter:117 family=2 entries=8 op=nft_register_rule pid=5361 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:57.414000 audit[5361]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff7bdeecb0 a2=0 a3=7fff7bdeec9c items=0 ppid=2737 pid=5361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:57.414000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:17:57.416000 audit[5361]: NETFILTER_CFG table=nat:118 family=2 entries=34 op=nft_register_chain pid=5361 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:17:57.416000 audit[5361]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7fff7bdeecb0 a2=0 a3=7fff7bdeec9c items=0 ppid=2737 pid=5361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:17:57.416000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:18:00.703012 systemd[1]: Started sshd@7-139.178.70.105:22-139.178.68.195:56832.service - OpenSSH per-connection server daemon (139.178.68.195:56832). Jun 25 16:18:00.709419 kernel: kauditd_printk_skb: 26 callbacks suppressed Jun 25 16:18:00.709474 kernel: audit: type=1130 audit(1719332280.702:325): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.105:22-139.178.68.195:56832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:00.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.105:22-139.178.68.195:56832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:18:00.782000 audit[5378]: USER_ACCT pid=5378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:00.785847 kernel: audit: type=1101 audit(1719332280.782:326): pid=5378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:00.785886 sshd[5378]: Accepted publickey for core from 139.178.68.195 port 56832 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:18:00.785000 audit[5378]: CRED_ACQ pid=5378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:00.789458 kernel: audit: type=1103 audit(1719332280.785:327): pid=5378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:00.789488 kernel: audit: type=1006 audit(1719332280.785:328): pid=5378 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jun 25 16:18:00.789503 kernel: audit: type=1300 audit(1719332280.785:328): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3ff090c0 a2=3 a3=7fd754bee480 items=0 ppid=1 pid=5378 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:00.785000 audit[5378]: 
SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3ff090c0 a2=3 a3=7fd754bee480 items=0 ppid=1 pid=5378 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:00.791511 kernel: audit: type=1327 audit(1719332280.785:328): proctitle=737368643A20636F7265205B707269765D Jun 25 16:18:00.785000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:18:00.792402 sshd[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:18:00.806355 systemd-logind[1426]: New session 10 of user core. Jun 25 16:18:00.821021 kernel: audit: type=1105 audit(1719332280.814:329): pid=5378 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:00.823687 kernel: audit: type=1103 audit(1719332280.818:330): pid=5381 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:00.814000 audit[5378]: USER_START pid=5378 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:00.818000 audit[5381]: CRED_ACQ pid=5381 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:00.811894 systemd[1]: Started 
session-10.scope - Session 10 of User core. Jun 25 16:18:01.779764 sshd[5378]: pam_unix(sshd:session): session closed for user core Jun 25 16:18:01.779000 audit[5378]: USER_END pid=5378 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:01.780000 audit[5378]: CRED_DISP pid=5378 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:01.785381 kernel: audit: type=1106 audit(1719332281.779:331): pid=5378 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:01.786228 kernel: audit: type=1104 audit(1719332281.780:332): pid=5378 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:01.789639 systemd[1]: sshd@7-139.178.70.105:22-139.178.68.195:56832.service: Deactivated successfully. Jun 25 16:18:01.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.105:22-139.178.68.195:56832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:01.790628 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 16:18:01.790885 systemd-logind[1426]: Session 10 logged out. Waiting for processes to exit. 
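The `PROCTITLE proctitle=7373…765D` records above carry the process title hex-encoded. A minimal sketch of decoding that value (helper name is illustrative, not from any audit library) shows it is the privileged sshd monitor process:

```python
# Decode a hex-encoded audit PROCTITLE value into readable text.
# In multi-argument command lines, NUL bytes separate argv entries;
# this log's value has none, so the replace is a no-op here.
def decode_proctitle(hex_value: str) -> str:
    return bytes.fromhex(hex_value).decode("utf-8").replace("\x00", " ")

# The value repeated throughout this log's PROCTITLE records:
print(decode_proctitle("737368643A20636F7265205B707269765D"))
# → sshd: core [priv]
```

The same string appears for every session because each record belongs to the pre-authentication sshd privilege-separation parent.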
Jun 25 16:18:01.791426 systemd-logind[1426]: Removed session 10. Jun 25 16:18:06.785855 systemd[1]: Started sshd@8-139.178.70.105:22-139.178.68.195:56844.service - OpenSSH per-connection server daemon (139.178.68.195:56844). Jun 25 16:18:06.787473 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:18:06.787520 kernel: audit: type=1130 audit(1719332286.785:334): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.105:22-139.178.68.195:56844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:06.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.105:22-139.178.68.195:56844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:06.860000 audit[5396]: USER_ACCT pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:06.862546 sshd[5396]: Accepted publickey for core from 139.178.68.195 port 56844 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:18:06.862000 audit[5396]: CRED_ACQ pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:06.863609 sshd[5396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:18:06.865296 kernel: audit: type=1101 audit(1719332286.860:335): pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" 
hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:06.865332 kernel: audit: type=1103 audit(1719332286.862:336): pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:06.865354 kernel: audit: type=1006 audit(1719332286.862:337): pid=5396 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jun 25 16:18:06.866678 kernel: audit: type=1300 audit(1719332286.862:337): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff80f56a20 a2=3 a3=7ff570ee6480 items=0 ppid=1 pid=5396 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:06.862000 audit[5396]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff80f56a20 a2=3 a3=7ff570ee6480 items=0 ppid=1 pid=5396 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:06.862000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:18:06.869816 kernel: audit: type=1327 audit(1719332286.862:337): proctitle=737368643A20636F7265205B707269765D Jun 25 16:18:06.872125 systemd-logind[1426]: New session 11 of user core. Jun 25 16:18:06.874849 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jun 25 16:18:06.877000 audit[5396]: USER_START pid=5396 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:06.879000 audit[5399]: CRED_ACQ pid=5399 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:06.882931 kernel: audit: type=1105 audit(1719332286.877:338): pid=5396 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:06.882978 kernel: audit: type=1103 audit(1719332286.879:339): pid=5399 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:07.373036 sshd[5396]: pam_unix(sshd:session): session closed for user core Jun 25 16:18:07.373000 audit[5396]: USER_END pid=5396 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:07.374000 audit[5396]: CRED_DISP pid=5396 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 
25 16:18:07.376455 systemd[1]: sshd@8-139.178.70.105:22-139.178.68.195:56844.service: Deactivated successfully. Jun 25 16:18:07.376964 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 16:18:07.378043 kernel: audit: type=1106 audit(1719332287.373:340): pid=5396 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:07.378077 kernel: audit: type=1104 audit(1719332287.374:341): pid=5396 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:07.378448 systemd-logind[1426]: Session 11 logged out. Waiting for processes to exit. Jun 25 16:18:07.379068 systemd-logind[1426]: Removed session 11. Jun 25 16:18:07.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.105:22-139.178.68.195:56844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:12.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.105:22-139.178.68.195:59444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:12.382739 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:18:12.382789 kernel: audit: type=1130 audit(1719332292.381:343): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.105:22-139.178.68.195:59444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:18:12.381966 systemd[1]: Started sshd@9-139.178.70.105:22-139.178.68.195:59444.service - OpenSSH per-connection server daemon (139.178.68.195:59444). Jun 25 16:18:12.606000 audit[5418]: USER_ACCT pid=5418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:12.606922 sshd[5418]: Accepted publickey for core from 139.178.68.195 port 59444 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:18:12.608125 sshd[5418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:18:12.607000 audit[5418]: CRED_ACQ pid=5418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:12.610816 kernel: audit: type=1101 audit(1719332292.606:344): pid=5418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:12.610866 kernel: audit: type=1103 audit(1719332292.607:345): pid=5418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:12.610883 kernel: audit: type=1006 audit(1719332292.607:346): pid=5418 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jun 25 16:18:12.607000 audit[5418]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff15e3a4d0 a2=3 
a3=7fe485782480 items=0 ppid=1 pid=5418 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:12.613929 kernel: audit: type=1300 audit(1719332292.607:346): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff15e3a4d0 a2=3 a3=7fe485782480 items=0 ppid=1 pid=5418 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:12.613966 kernel: audit: type=1327 audit(1719332292.607:346): proctitle=737368643A20636F7265205B707269765D Jun 25 16:18:12.607000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:18:12.617225 systemd-logind[1426]: New session 12 of user core. Jun 25 16:18:12.633000 audit[5418]: USER_START pid=5418 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:12.643936 kernel: audit: type=1105 audit(1719332292.633:347): pid=5418 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:12.643996 kernel: audit: type=1103 audit(1719332292.634:348): pid=5421 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:12.634000 audit[5421]: CRED_ACQ pid=5421 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:12.630881 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 16:18:12.824874 systemd[1]: Started sshd@10-139.178.70.105:22-139.178.68.195:59446.service - OpenSSH per-connection server daemon (139.178.68.195:59446). Jun 25 16:18:12.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.105:22-139.178.68.195:59446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:12.827630 kernel: audit: type=1130 audit(1719332292.824:349): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.105:22-139.178.68.195:59446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:12.839774 sshd[5418]: pam_unix(sshd:session): session closed for user core Jun 25 16:18:12.840000 audit[5418]: USER_END pid=5418 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:12.846457 kernel: audit: type=1106 audit(1719332292.840:350): pid=5418 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:12.842000 audit[5418]: CRED_DISP pid=5418 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 
addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:12.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.105:22-139.178.68.195:59444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:12.844507 systemd[1]: sshd@9-139.178.70.105:22-139.178.68.195:59444.service: Deactivated successfully. Jun 25 16:18:12.845219 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 16:18:12.845243 systemd-logind[1426]: Session 12 logged out. Waiting for processes to exit. Jun 25 16:18:12.846718 systemd-logind[1426]: Removed session 12. Jun 25 16:18:12.889000 audit[5429]: USER_ACCT pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:12.890136 sshd[5429]: Accepted publickey for core from 139.178.68.195 port 59446 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:18:12.890000 audit[5429]: CRED_ACQ pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:12.890000 audit[5429]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcf5701c0 a2=3 a3=7f49a2589480 items=0 ppid=1 pid=5429 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:12.890000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:18:12.896085 sshd[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:18:12.899294 systemd-logind[1426]: New session 13 of user 
core. Jun 25 16:18:12.903918 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 16:18:12.906000 audit[5429]: USER_START pid=5429 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:12.918000 audit[5434]: CRED_ACQ pid=5434 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:14.112580 systemd[1]: Started sshd@11-139.178.70.105:22-139.178.68.195:59456.service - OpenSSH per-connection server daemon (139.178.68.195:59456). Jun 25 16:18:14.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.105:22-139.178.68.195:59456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:14.126433 sshd[5429]: pam_unix(sshd:session): session closed for user core Jun 25 16:18:14.132000 audit[5429]: USER_END pid=5429 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:14.133000 audit[5429]: CRED_DISP pid=5429 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:14.135640 systemd-logind[1426]: Session 13 logged out. Waiting for processes to exit. 
Jun 25 16:18:14.136406 systemd[1]: sshd@10-139.178.70.105:22-139.178.68.195:59446.service: Deactivated successfully. Jun 25 16:18:14.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.105:22-139.178.68.195:59446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:14.136992 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 16:18:14.137566 systemd-logind[1426]: Removed session 13. Jun 25 16:18:14.166000 audit[5443]: USER_ACCT pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:14.167628 sshd[5443]: Accepted publickey for core from 139.178.68.195 port 59456 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:18:14.167000 audit[5443]: CRED_ACQ pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:14.167000 audit[5443]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1bfcf2b0 a2=3 a3=7f5a96a53480 items=0 ppid=1 pid=5443 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:14.167000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:18:14.169015 sshd[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:18:14.172319 systemd-logind[1426]: New session 14 of user core. Jun 25 16:18:14.176881 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jun 25 16:18:14.179000 audit[5443]: USER_START pid=5443 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:14.180000 audit[5448]: CRED_ACQ pid=5448 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:14.290762 sshd[5443]: pam_unix(sshd:session): session closed for user core Jun 25 16:18:14.290000 audit[5443]: USER_END pid=5443 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:14.290000 audit[5443]: CRED_DISP pid=5443 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:14.292715 systemd-logind[1426]: Session 14 logged out. Waiting for processes to exit. Jun 25 16:18:14.292912 systemd[1]: sshd@11-139.178.70.105:22-139.178.68.195:59456.service: Deactivated successfully. Jun 25 16:18:14.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.105:22-139.178.68.195:59456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:14.293469 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 16:18:14.294301 systemd-logind[1426]: Removed session 14. 
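Each `audit(…)` stamp in these records is `<epoch seconds>.<milliseconds>:<event serial>`; events sharing a serial (e.g. the SYSCALL/PROCTITLE pairs above) belong to one kernel event. A small helper (hypothetical name, plain stdlib) converts a stamp back to the wall-clock time shown at the head of the journal line:

```python
from datetime import datetime, timezone

def parse_audit_stamp(stamp: str) -> tuple[datetime, int]:
    """Split 'audit(1719332280.785:328)' into (UTC datetime, event serial)."""
    ts, serial = stamp[len("audit("):-1].split(":")
    return datetime.fromtimestamp(float(ts), tz=timezone.utc), int(serial)

# First audit stamp in this excerpt; the journal timestamps here are UTC,
# so the result matches the 'Jun 25 16:18:00.785' prefix on that line.
when, serial = parse_audit_stamp("audit(1719332280.785:328)")
print(when.strftime("%b %d %H:%M:%S"), serial)
# → Jun 25 16:18:00 328
```

This is how the delayed `kernel: audit: type=… audit(…)` printk lines can be matched to the earlier in-order `audit:` records they duplicate.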
Jun 25 16:18:19.296868 systemd[1]: Started sshd@12-139.178.70.105:22-139.178.68.195:45102.service - OpenSSH per-connection server daemon (139.178.68.195:45102). Jun 25 16:18:19.298381 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 16:18:19.298411 kernel: audit: type=1130 audit(1719332299.296:370): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.105:22-139.178.68.195:45102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:19.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.105:22-139.178.68.195:45102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:19.329000 audit[5493]: USER_ACCT pid=5493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:19.332940 sshd[5493]: Accepted publickey for core from 139.178.68.195 port 45102 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:18:19.333632 kernel: audit: type=1101 audit(1719332299.329:371): pid=5493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:19.333000 audit[5493]: CRED_ACQ pid=5493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:19.336165 sshd[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 
25 16:18:19.337569 kernel: audit: type=1103 audit(1719332299.333:372): pid=5493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:19.337602 kernel: audit: type=1006 audit(1719332299.333:373): pid=5493 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jun 25 16:18:19.340127 kernel: audit: type=1300 audit(1719332299.333:373): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7a486410 a2=3 a3=7feb8d570480 items=0 ppid=1 pid=5493 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:19.340644 kernel: audit: type=1327 audit(1719332299.333:373): proctitle=737368643A20636F7265205B707269765D Jun 25 16:18:19.333000 audit[5493]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7a486410 a2=3 a3=7feb8d570480 items=0 ppid=1 pid=5493 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:19.333000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:18:19.339883 systemd-logind[1426]: New session 15 of user core. Jun 25 16:18:19.341798 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 25 16:18:19.344000 audit[5493]: USER_START pid=5493 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:19.344000 audit[5496]: CRED_ACQ pid=5496 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:19.349073 kernel: audit: type=1105 audit(1719332299.344:374): pid=5493 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:19.349105 kernel: audit: type=1103 audit(1719332299.344:375): pid=5496 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:19.452825 sshd[5493]: pam_unix(sshd:session): session closed for user core Jun 25 16:18:19.453000 audit[5493]: USER_END pid=5493 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:19.453000 audit[5493]: CRED_DISP pid=5493 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 
25 16:18:19.457941 kernel: audit: type=1106 audit(1719332299.453:376): pid=5493 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:19.457979 kernel: audit: type=1104 audit(1719332299.453:377): pid=5493 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:19.460243 systemd-logind[1426]: Session 15 logged out. Waiting for processes to exit. Jun 25 16:18:19.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.105:22-139.178.68.195:45102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:19.460628 systemd[1]: sshd@12-139.178.70.105:22-139.178.68.195:45102.service: Deactivated successfully. Jun 25 16:18:19.461669 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 16:18:19.461995 systemd-logind[1426]: Removed session 15. Jun 25 16:18:20.430623 systemd[1]: run-containerd-runc-k8s.io-c226ece918507a009446cf70e6bb58c32614915e6f7cd8ea70ff780dab08dd34-runc.Seflk3.mount: Deactivated successfully. Jun 25 16:18:24.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.105:22-139.178.68.195:45116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:18:24.471229 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:18:24.471276 kernel: audit: type=1130 audit(1719332304.460:379): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.105:22-139.178.68.195:45116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:24.460942 systemd[1]: Started sshd@13-139.178.70.105:22-139.178.68.195:45116.service - OpenSSH per-connection server daemon (139.178.68.195:45116). Jun 25 16:18:24.862000 audit[5527]: USER_ACCT pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:24.876795 kernel: audit: type=1101 audit(1719332304.862:380): pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:24.876852 kernel: audit: type=1103 audit(1719332304.863:381): pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:24.876869 kernel: audit: type=1006 audit(1719332304.863:382): pid=5527 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jun 25 16:18:24.876884 kernel: audit: type=1300 audit(1719332304.863:382): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffa49e5d20 a2=3 a3=7f2a47fd4480 items=0 ppid=1 pid=5527 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:24.876900 kernel: audit: type=1327 audit(1719332304.863:382): proctitle=737368643A20636F7265205B707269765D Jun 25 16:18:24.863000 audit[5527]: CRED_ACQ pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:24.863000 audit[5527]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffa49e5d20 a2=3 a3=7f2a47fd4480 items=0 ppid=1 pid=5527 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:24.863000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:18:24.877106 sshd[5527]: Accepted publickey for core from 139.178.68.195 port 45116 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:18:24.894581 sshd[5527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:18:24.906386 systemd-logind[1426]: New session 16 of user core. 
Jun 25 16:18:24.920010 kernel: audit: type=1105 audit(1719332304.914:383): pid=5527 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:24.920056 kernel: audit: type=1103 audit(1719332304.917:384): pid=5530 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:24.914000 audit[5527]: USER_START pid=5527 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:24.917000 audit[5530]: CRED_ACQ pid=5530 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:24.911780 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 16:18:25.417874 sshd[5527]: pam_unix(sshd:session): session closed for user core Jun 25 16:18:25.418000 audit[5527]: USER_END pid=5527 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:25.421057 systemd[1]: sshd@13-139.178.70.105:22-139.178.68.195:45116.service: Deactivated successfully. 
Jun 25 16:18:25.421629 kernel: audit: type=1106 audit(1719332305.418:385): pid=5527 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:25.418000 audit[5527]: CRED_DISP pid=5527 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:25.421759 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 16:18:25.421775 systemd-logind[1426]: Session 16 logged out. Waiting for processes to exit. Jun 25 16:18:25.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.105:22-139.178.68.195:45116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:25.424211 systemd-logind[1426]: Removed session 16. Jun 25 16:18:25.424637 kernel: audit: type=1104 audit(1719332305.418:386): pid=5527 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:30.424806 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:18:30.424897 kernel: audit: type=1130 audit(1719332310.423:388): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.105:22-139.178.68.195:39722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:18:30.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.105:22-139.178.68.195:39722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:30.423919 systemd[1]: Started sshd@14-139.178.70.105:22-139.178.68.195:39722.service - OpenSSH per-connection server daemon (139.178.68.195:39722). Jun 25 16:18:30.457000 audit[5546]: USER_ACCT pid=5546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:30.457903 sshd[5546]: Accepted publickey for core from 139.178.68.195 port 39722 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:18:30.458000 audit[5546]: CRED_ACQ pid=5546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:30.463221 kernel: audit: type=1101 audit(1719332310.457:389): pid=5546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:30.463262 kernel: audit: type=1103 audit(1719332310.458:390): pid=5546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:30.463282 kernel: audit: type=1006 audit(1719332310.458:391): pid=5546 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 
tty=(none) old-ses=4294967295 ses=17 res=1 Jun 25 16:18:30.459192 sshd[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:18:30.468409 kernel: audit: type=1300 audit(1719332310.458:391): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc035a87a0 a2=3 a3=7fcdf8c96480 items=0 ppid=1 pid=5546 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:30.468440 kernel: audit: type=1327 audit(1719332310.458:391): proctitle=737368643A20636F7265205B707269765D Jun 25 16:18:30.458000 audit[5546]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc035a87a0 a2=3 a3=7fcdf8c96480 items=0 ppid=1 pid=5546 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:30.458000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:18:30.463691 systemd-logind[1426]: New session 17 of user core. Jun 25 16:18:30.466920 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jun 25 16:18:30.470000 audit[5546]: USER_START pid=5546 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:30.471000 audit[5549]: CRED_ACQ pid=5549 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:30.476504 kernel: audit: type=1105 audit(1719332310.470:392): pid=5546 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:30.476567 kernel: audit: type=1103 audit(1719332310.471:393): pid=5549 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:30.608191 sshd[5546]: pam_unix(sshd:session): session closed for user core Jun 25 16:18:30.608000 audit[5546]: USER_END pid=5546 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:30.611636 kernel: audit: type=1106 audit(1719332310.608:394): pid=5546 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail 
acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:30.608000 audit[5546]: CRED_DISP pid=5546 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:30.613955 kernel: audit: type=1104 audit(1719332310.608:395): pid=5546 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:30.613033 systemd[1]: sshd@14-139.178.70.105:22-139.178.68.195:39722.service: Deactivated successfully. Jun 25 16:18:30.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.105:22-139.178.68.195:39722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:30.613569 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 16:18:30.614153 systemd-logind[1426]: Session 17 logged out. Waiting for processes to exit. Jun 25 16:18:30.614962 systemd-logind[1426]: Removed session 17. Jun 25 16:18:35.612896 systemd[1]: Started sshd@15-139.178.70.105:22-139.178.68.195:39736.service - OpenSSH per-connection server daemon (139.178.68.195:39736). Jun 25 16:18:35.615860 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:18:35.615905 kernel: audit: type=1130 audit(1719332315.612:397): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.105:22-139.178.68.195:39736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:18:35.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.105:22-139.178.68.195:39736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:35.649000 audit[5563]: USER_ACCT pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:35.650669 sshd[5563]: Accepted publickey for core from 139.178.68.195 port 39736 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:18:35.649000 audit[5563]: CRED_ACQ pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:35.653600 kernel: audit: type=1101 audit(1719332315.649:398): pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:35.653643 kernel: audit: type=1103 audit(1719332315.649:399): pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:35.653668 kernel: audit: type=1006 audit(1719332315.649:400): pid=5563 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jun 25 16:18:35.649000 audit[5563]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc51c90490 a2=3 
a3=7fa4bb4cd480 items=0 ppid=1 pid=5563 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:35.663479 kernel: audit: type=1300 audit(1719332315.649:400): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc51c90490 a2=3 a3=7fa4bb4cd480 items=0 ppid=1 pid=5563 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:35.670042 kernel: audit: type=1327 audit(1719332315.649:400): proctitle=737368643A20636F7265205B707269765D Jun 25 16:18:35.649000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:18:35.655462 sshd[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:18:35.668000 audit[5563]: USER_START pid=5563 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:35.676608 kernel: audit: type=1105 audit(1719332315.668:401): pid=5563 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:35.676686 kernel: audit: type=1103 audit(1719332315.669:402): pid=5566 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:35.669000 audit[5566]: CRED_ACQ pid=5566 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:35.664656 systemd-logind[1426]: New session 18 of user core. Jun 25 16:18:35.665778 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 16:18:35.764422 sshd[5563]: pam_unix(sshd:session): session closed for user core Jun 25 16:18:35.765000 audit[5563]: USER_END pid=5563 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:35.765000 audit[5563]: CRED_DISP pid=5563 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:35.768708 kernel: audit: type=1106 audit(1719332315.765:403): pid=5563 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:35.768739 kernel: audit: type=1104 audit(1719332315.765:404): pid=5563 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:35.771912 systemd[1]: Started sshd@16-139.178.70.105:22-139.178.68.195:39746.service - OpenSSH per-connection server daemon (139.178.68.195:39746). 
Jun 25 16:18:35.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.105:22-139.178.68.195:39746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:35.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.105:22-139.178.68.195:39736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:35.772303 systemd[1]: sshd@15-139.178.70.105:22-139.178.68.195:39736.service: Deactivated successfully. Jun 25 16:18:35.773397 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 16:18:35.774195 systemd-logind[1426]: Session 18 logged out. Waiting for processes to exit. Jun 25 16:18:35.774801 systemd-logind[1426]: Removed session 18. Jun 25 16:18:35.802000 audit[5575]: USER_ACCT pid=5575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:35.803243 sshd[5575]: Accepted publickey for core from 139.178.68.195 port 39746 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:18:35.803000 audit[5575]: CRED_ACQ pid=5575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:35.803000 audit[5575]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdccfe5240 a2=3 a3=7ff421da9480 items=0 ppid=1 pid=5575 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:35.803000 audit: 
PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:18:35.804157 sshd[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:18:35.806755 systemd-logind[1426]: New session 19 of user core. Jun 25 16:18:35.810759 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 16:18:35.812000 audit[5575]: USER_START pid=5575 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:35.813000 audit[5580]: CRED_ACQ pid=5580 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:36.152737 sshd[5575]: pam_unix(sshd:session): session closed for user core Jun 25 16:18:36.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.105:22-139.178.68.195:39756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:36.153872 systemd[1]: Started sshd@17-139.178.70.105:22-139.178.68.195:39756.service - OpenSSH per-connection server daemon (139.178.68.195:39756). 
Jun 25 16:18:36.153000 audit[5575]: USER_END pid=5575 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:36.154000 audit[5575]: CRED_DISP pid=5575 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:36.156117 systemd[1]: sshd@16-139.178.70.105:22-139.178.68.195:39746.service: Deactivated successfully. Jun 25 16:18:36.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.105:22-139.178.68.195:39746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:36.157018 systemd-logind[1426]: Session 19 logged out. Waiting for processes to exit. Jun 25 16:18:36.157042 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 16:18:36.158671 systemd-logind[1426]: Removed session 19. 
Jun 25 16:18:36.363000 audit[5586]: USER_ACCT pid=5586 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:36.364009 sshd[5586]: Accepted publickey for core from 139.178.68.195 port 39756 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:18:36.365000 audit[5586]: CRED_ACQ pid=5586 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:36.365000 audit[5586]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb9a51ee0 a2=3 a3=7f37cd6c9480 items=0 ppid=1 pid=5586 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:36.365000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:18:36.383853 sshd[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:18:36.394242 systemd-logind[1426]: New session 20 of user core. Jun 25 16:18:36.396584 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jun 25 16:18:36.399000 audit[5586]: USER_START pid=5586 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:36.401000 audit[5591]: CRED_ACQ pid=5591 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:37.337421 sshd[5586]: pam_unix(sshd:session): session closed for user core Jun 25 16:18:37.340000 audit[5586]: USER_END pid=5586 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:37.341000 audit[5586]: CRED_DISP pid=5586 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:37.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.105:22-139.178.68.195:39760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:37.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.105:22-139.178.68.195:39756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:18:37.343913 systemd[1]: Started sshd@18-139.178.70.105:22-139.178.68.195:39760.service - OpenSSH per-connection server daemon (139.178.68.195:39760). Jun 25 16:18:37.344365 systemd[1]: sshd@17-139.178.70.105:22-139.178.68.195:39756.service: Deactivated successfully. Jun 25 16:18:37.349547 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 16:18:37.350392 systemd-logind[1426]: Session 20 logged out. Waiting for processes to exit. Jun 25 16:18:37.354000 audit[5605]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=5605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:18:37.354000 audit[5605]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fff8c6eebe0 a2=0 a3=7fff8c6eebcc items=0 ppid=2737 pid=5605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:37.354000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:18:37.355000 audit[5605]: NETFILTER_CFG table=nat:120 family=2 entries=22 op=nft_register_rule pid=5605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:18:37.355000 audit[5605]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff8c6eebe0 a2=0 a3=0 items=0 ppid=2737 pid=5605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:37.355000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:18:37.361373 systemd-logind[1426]: Removed session 20. 
Jun 25 16:18:37.379000 audit[5608]: NETFILTER_CFG table=filter:121 family=2 entries=32 op=nft_register_rule pid=5608 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:18:37.379000 audit[5608]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffdfae9d350 a2=0 a3=7ffdfae9d33c items=0 ppid=2737 pid=5608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:37.379000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:18:37.380000 audit[5608]: NETFILTER_CFG table=nat:122 family=2 entries=22 op=nft_register_rule pid=5608 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:18:37.380000 audit[5608]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffdfae9d350 a2=0 a3=0 items=0 ppid=2737 pid=5608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:37.380000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:18:37.415000 audit[5602]: USER_ACCT pid=5602 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 16:18:37.416688 sshd[5602]: Accepted publickey for core from 139.178.68.195 port 39760 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg Jun 25 16:18:37.416000 audit[5602]: CRED_ACQ pid=5602 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:37.416000 audit[5602]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd70cc37b0 a2=3 a3=7f98139de480 items=0 ppid=1 pid=5602 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:18:37.416000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jun 25 16:18:37.417531 sshd[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 16:18:37.422739 systemd-logind[1426]: New session 21 of user core.
Jun 25 16:18:37.426763 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 25 16:18:37.431000 audit[5602]: USER_START pid=5602 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:37.433000 audit[5610]: CRED_ACQ pid=5610 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:38.110625 sshd[5602]: pam_unix(sshd:session): session closed for user core
Jun 25 16:18:38.111000 audit[5602]: USER_END pid=5602 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:38.111000 audit[5602]: CRED_DISP pid=5602 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:38.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.105:22-139.178.68.195:51200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:18:38.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.105:22-139.178.68.195:39760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:18:38.114880 systemd[1]: Started sshd@19-139.178.70.105:22-139.178.68.195:51200.service - OpenSSH per-connection server daemon (139.178.68.195:51200).
Jun 25 16:18:38.115285 systemd[1]: sshd@18-139.178.70.105:22-139.178.68.195:39760.service: Deactivated successfully.
Jun 25 16:18:38.116639 systemd[1]: session-21.scope: Deactivated successfully.
Jun 25 16:18:38.116933 systemd-logind[1426]: Session 21 logged out. Waiting for processes to exit.
Jun 25 16:18:38.118680 systemd-logind[1426]: Removed session 21.
Jun 25 16:18:38.176000 audit[5615]: USER_ACCT pid=5615 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:38.176947 sshd[5615]: Accepted publickey for core from 139.178.68.195 port 51200 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg
Jun 25 16:18:38.177000 audit[5615]: CRED_ACQ pid=5615 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:38.177000 audit[5615]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcd2836e30 a2=3 a3=7fc2448ef480 items=0 ppid=1 pid=5615 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:18:38.177000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jun 25 16:18:38.178329 sshd[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 16:18:38.186830 systemd-logind[1426]: New session 22 of user core.
Jun 25 16:18:38.191796 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 25 16:18:38.194000 audit[5615]: USER_START pid=5615 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:38.195000 audit[5620]: CRED_ACQ pid=5620 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:38.308169 sshd[5615]: pam_unix(sshd:session): session closed for user core
Jun 25 16:18:38.308000 audit[5615]: USER_END pid=5615 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:38.308000 audit[5615]: CRED_DISP pid=5615 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:38.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.105:22-139.178.68.195:51200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:18:38.310340 systemd[1]: sshd@19-139.178.70.105:22-139.178.68.195:51200.service: Deactivated successfully.
Jun 25 16:18:38.311437 systemd[1]: session-22.scope: Deactivated successfully.
Jun 25 16:18:38.311660 systemd-logind[1426]: Session 22 logged out. Waiting for processes to exit.
Jun 25 16:18:38.312135 systemd-logind[1426]: Removed session 22.
Jun 25 16:18:43.016000 audit[5638]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=5638 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jun 25 16:18:43.018587 kernel: kauditd_printk_skb: 57 callbacks suppressed
Jun 25 16:18:43.018656 kernel: audit: type=1325 audit(1719332323.016:446): table=filter:123 family=2 entries=20 op=nft_register_rule pid=5638 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jun 25 16:18:43.018678 kernel: audit: type=1300 audit(1719332323.016:446): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffccc3f39c0 a2=0 a3=7ffccc3f39ac items=0 ppid=2737 pid=5638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:18:43.016000 audit[5638]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffccc3f39c0 a2=0 a3=7ffccc3f39ac items=0 ppid=2737 pid=5638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:18:43.016000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jun 25 16:18:43.021969 kernel: audit: type=1327 audit(1719332323.016:446): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jun 25 16:18:43.029796 kernel: audit: type=1325 audit(1719332323.017:447): table=nat:124 family=2 entries=106 op=nft_register_chain pid=5638 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jun 25 16:18:43.029853 kernel: audit: type=1300 audit(1719332323.017:447): arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffccc3f39c0 a2=0 a3=7ffccc3f39ac items=0 ppid=2737 pid=5638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:18:43.029872 kernel: audit: type=1327 audit(1719332323.017:447): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jun 25 16:18:43.017000 audit[5638]: NETFILTER_CFG table=nat:124 family=2 entries=106 op=nft_register_chain pid=5638 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jun 25 16:18:43.017000 audit[5638]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffccc3f39c0 a2=0 a3=7ffccc3f39ac items=0 ppid=2737 pid=5638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:18:43.017000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jun 25 16:18:43.314940 systemd[1]: Started sshd@20-139.178.70.105:22-139.178.68.195:51202.service - OpenSSH per-connection server daemon (139.178.68.195:51202).
Jun 25 16:18:43.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.105:22-139.178.68.195:51202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:18:43.319643 kernel: audit: type=1130 audit(1719332323.314:448): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.105:22-139.178.68.195:51202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:18:43.344000 audit[5640]: USER_ACCT pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:43.346415 sshd[5640]: Accepted publickey for core from 139.178.68.195 port 51202 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg
Jun 25 16:18:43.347000 audit[5640]: CRED_ACQ pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:43.348909 sshd[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 16:18:43.354420 kernel: audit: type=1101 audit(1719332323.344:449): pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:43.354451 kernel: audit: type=1103 audit(1719332323.347:450): pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:43.354474 kernel: audit: type=1006 audit(1719332323.348:451): pid=5640 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Jun 25 16:18:43.348000 audit[5640]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed6a1cb90 a2=3 a3=7f14df107480 items=0 ppid=1 pid=5640 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:18:43.348000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jun 25 16:18:43.353871 systemd-logind[1426]: New session 23 of user core.
Jun 25 16:18:43.366873 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 25 16:18:43.369000 audit[5640]: USER_START pid=5640 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:43.370000 audit[5643]: CRED_ACQ pid=5643 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:43.562785 sshd[5640]: pam_unix(sshd:session): session closed for user core
Jun 25 16:18:43.578000 audit[5640]: USER_END pid=5640 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:43.590000 audit[5640]: CRED_DISP pid=5640 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:43.593126 systemd[1]: sshd@20-139.178.70.105:22-139.178.68.195:51202.service: Deactivated successfully.
Jun 25 16:18:43.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.105:22-139.178.68.195:51202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:18:43.594284 systemd[1]: session-23.scope: Deactivated successfully.
Jun 25 16:18:43.594785 systemd-logind[1426]: Session 23 logged out. Waiting for processes to exit.
Jun 25 16:18:43.596154 systemd-logind[1426]: Removed session 23.
Jun 25 16:18:48.570887 systemd[1]: Started sshd@21-139.178.70.105:22-139.178.68.195:34532.service - OpenSSH per-connection server daemon (139.178.68.195:34532).
Jun 25 16:18:48.576723 kernel: kauditd_printk_skb: 7 callbacks suppressed
Jun 25 16:18:48.576754 kernel: audit: type=1130 audit(1719332328.570:457): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.105:22-139.178.68.195:34532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:18:48.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.105:22-139.178.68.195:34532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:18:48.637000 audit[5674]: USER_ACCT pid=5674 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:48.638055 sshd[5674]: Accepted publickey for core from 139.178.68.195 port 34532 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg
Jun 25 16:18:48.639000 audit[5674]: CRED_ACQ pid=5674 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:48.642096 kernel: audit: type=1101 audit(1719332328.637:458): pid=5674 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:48.642133 kernel: audit: type=1103 audit(1719332328.639:459): pid=5674 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:48.643281 kernel: audit: type=1006 audit(1719332328.639:460): pid=5674 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Jun 25 16:18:48.643309 kernel: audit: type=1300 audit(1719332328.639:460): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe679033e0 a2=3 a3=7f83e5d1e480 items=0 ppid=1 pid=5674 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:18:48.639000 audit[5674]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe679033e0 a2=3 a3=7f83e5d1e480 items=0 ppid=1 pid=5674 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:18:48.649350 kernel: audit: type=1327 audit(1719332328.639:460): proctitle=737368643A20636F7265205B707269765D
Jun 25 16:18:48.639000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jun 25 16:18:48.646306 sshd[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 16:18:48.652969 systemd-logind[1426]: New session 24 of user core.
Jun 25 16:18:48.656795 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 25 16:18:48.658000 audit[5674]: USER_START pid=5674 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:48.659000 audit[5677]: CRED_ACQ pid=5677 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:48.663670 kernel: audit: type=1105 audit(1719332328.658:461): pid=5674 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:48.663705 kernel: audit: type=1103 audit(1719332328.659:462): pid=5677 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:48.856107 sshd[5674]: pam_unix(sshd:session): session closed for user core
Jun 25 16:18:48.857000 audit[5674]: USER_END pid=5674 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:48.860640 kernel: audit: type=1106 audit(1719332328.857:463): pid=5674 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:48.860000 audit[5674]: CRED_DISP pid=5674 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:48.862192 systemd-logind[1426]: Session 24 logged out. Waiting for processes to exit.
Jun 25 16:18:48.863779 kernel: audit: type=1104 audit(1719332328.860:464): pid=5674 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:48.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.105:22-139.178.68.195:34532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:18:48.863009 systemd[1]: sshd@21-139.178.70.105:22-139.178.68.195:34532.service: Deactivated successfully.
Jun 25 16:18:48.863521 systemd[1]: session-24.scope: Deactivated successfully.
Jun 25 16:18:48.864591 systemd-logind[1426]: Removed session 24.
Jun 25 16:18:53.863266 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jun 25 16:18:53.863375 kernel: audit: type=1130 audit(1719332333.861:466): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.105:22-139.178.68.195:34542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:18:53.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.105:22-139.178.68.195:34542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:18:53.862078 systemd[1]: Started sshd@22-139.178.70.105:22-139.178.68.195:34542.service - OpenSSH per-connection server daemon (139.178.68.195:34542).
Jun 25 16:18:54.335000 audit[5712]: USER_ACCT pid=5712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:54.338632 kernel: audit: type=1101 audit(1719332334.335:467): pid=5712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:54.338681 sshd[5712]: Accepted publickey for core from 139.178.68.195 port 34542 ssh2: RSA SHA256:uCEMA6eklrDbJlaWYGGBho0uJsnDZmMHuEedAw3kMAg
Jun 25 16:18:54.349214 kernel: audit: type=1103 audit(1719332334.338:468): pid=5712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:54.349258 kernel: audit: type=1006 audit(1719332334.341:469): pid=5712 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Jun 25 16:18:54.349330 kernel: audit: type=1300 audit(1719332334.341:469): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd51783350 a2=3 a3=7f2b56056480 items=0 ppid=1 pid=5712 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:18:54.349350 kernel: audit: type=1327 audit(1719332334.341:469): proctitle=737368643A20636F7265205B707269765D
Jun 25 16:18:54.338000 audit[5712]: CRED_ACQ pid=5712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:54.341000 audit[5712]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd51783350 a2=3 a3=7f2b56056480 items=0 ppid=1 pid=5712 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:18:54.341000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jun 25 16:18:54.351528 sshd[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 16:18:54.361426 systemd-logind[1426]: New session 25 of user core.
Jun 25 16:18:54.363754 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 25 16:18:54.366000 audit[5712]: USER_START pid=5712 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:54.368000 audit[5715]: CRED_ACQ pid=5715 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:54.371159 kernel: audit: type=1105 audit(1719332334.366:470): pid=5712 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:54.371199 kernel: audit: type=1103 audit(1719332334.368:471): pid=5715 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:54.505516 sshd[5712]: pam_unix(sshd:session): session closed for user core
Jun 25 16:18:54.509000 audit[5712]: USER_END pid=5712 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:54.509000 audit[5712]: CRED_DISP pid=5712 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:54.512812 systemd[1]: sshd@22-139.178.70.105:22-139.178.68.195:34542.service: Deactivated successfully.
Jun 25 16:18:54.513365 systemd[1]: session-25.scope: Deactivated successfully.
Jun 25 16:18:54.514518 kernel: audit: type=1106 audit(1719332334.509:472): pid=5712 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:54.514561 kernel: audit: type=1104 audit(1719332334.509:473): pid=5712 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Jun 25 16:18:54.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.105:22-139.178.68.195:34542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:18:54.514284 systemd-logind[1426]: Session 25 logged out. Waiting for processes to exit.
Jun 25 16:18:54.515036 systemd-logind[1426]: Removed session 25.
Jun 25 16:18:56.156180 systemd[1]: run-containerd-runc-k8s.io-2f13a7a707c80ec9840afc83b4fc716af3455474b4407b19b54d80625db6afc0-runc.jdmClK.mount: Deactivated successfully.